00:00:00.001 Started by upstream project "autotest-per-patch" build number 130915
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.042 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.043 The recommended git tool is: git
00:00:00.043 using credential 00000000-0000-0000-0000-000000000002
00:00:00.045 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.079 Fetching changes from the remote Git repository
00:00:00.081 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.148 Using shallow fetch with depth 1
00:00:00.148 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.148 > git --version # timeout=10
00:00:00.225 > git --version # 'git version 2.39.2'
00:00:00.225 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.277 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.277 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.238 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.250 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.263 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD)
00:00:04.263 > git config core.sparsecheckout # timeout=10
00:00:04.274 > git read-tree -mu HEAD # timeout=10
00:00:04.289 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5
00:00:04.304 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images"
00:00:04.304 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10
00:00:04.380 [Pipeline] Start of Pipeline
00:00:04.393 [Pipeline] library
00:00:04.394 Loading library shm_lib@master
00:00:04.395 Library shm_lib@master is cached. Copying from home.
00:00:04.411 [Pipeline] node
00:00:04.434 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.435 [Pipeline] {
00:00:04.443 [Pipeline] catchError
00:00:04.444 [Pipeline] {
00:00:04.454 [Pipeline] wrap
00:00:04.461 [Pipeline] {
00:00:04.468 [Pipeline] stage
00:00:04.470 [Pipeline] { (Prologue)
00:00:04.659 [Pipeline] sh
00:00:05.576 + logger -p user.info -t JENKINS-CI
00:00:05.610 [Pipeline] echo
00:00:05.611 Node: CYP12
00:00:05.616 [Pipeline] sh
00:00:05.957 [Pipeline] setCustomBuildProperty
00:00:05.968 [Pipeline] echo
00:00:05.970 Cleanup processes
00:00:05.975 [Pipeline] sh
00:00:06.273 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.273 38777 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.288 [Pipeline] sh
00:00:06.583 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.583 ++ grep -v 'sudo pgrep'
00:00:06.583 ++ awk '{print $1}'
00:00:06.583 + sudo kill -9
00:00:06.583 + true
00:00:06.602 [Pipeline] cleanWs
00:00:06.613 [WS-CLEANUP] Deleting project workspace...
00:00:06.613 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.645 [WS-CLEANUP] done
00:00:06.661 [Pipeline] setCustomBuildProperty
00:00:06.672 [Pipeline] sh
00:00:06.964 + sudo git config --global --replace-all safe.directory '*'
00:00:07.048 [Pipeline] httpRequest
00:00:09.390 [Pipeline] echo
00:00:09.392 Sorcerer 10.211.164.101 is alive
00:00:09.400 [Pipeline] retry
00:00:09.402 [Pipeline] {
00:00:09.413 [Pipeline] httpRequest
00:00:09.418 HttpMethod: GET
00:00:09.419 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:09.420 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:09.428 Response Code: HTTP/1.1 200 OK
00:00:09.429 Success: Status code 200 is in the accepted range: 200,404
00:00:09.429 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:37.713 [Pipeline] }
00:00:37.730 [Pipeline] // retry
00:00:37.737 [Pipeline] sh
00:00:38.039 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:38.052 [Pipeline] httpRequest
00:00:38.414 [Pipeline] echo
00:00:38.415 Sorcerer 10.211.164.101 is alive
00:00:38.422 [Pipeline] retry
00:00:38.424 [Pipeline] {
00:00:38.434 [Pipeline] httpRequest
00:00:38.438 HttpMethod: GET
00:00:38.438 URL: http://10.211.164.101/packages/spdk_52e9db7222c01b22348c13a11abc05c51f6afc75.tar.gz
00:00:38.439 Sending request to url: http://10.211.164.101/packages/spdk_52e9db7222c01b22348c13a11abc05c51f6afc75.tar.gz
00:00:38.474 Response Code: HTTP/1.1 200 OK
00:00:38.474 Success: Status code 200 is in the accepted range: 200,404
00:00:38.474 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_52e9db7222c01b22348c13a11abc05c51f6afc75.tar.gz
00:05:20.606 [Pipeline] }
00:05:20.624 [Pipeline] // retry
00:05:20.633 [Pipeline] sh
00:05:20.940 + tar --no-same-owner -xf spdk_52e9db7222c01b22348c13a11abc05c51f6afc75.tar.gz
00:05:24.273 [Pipeline] sh
00:05:24.574 + git -C spdk log --oneline -n5
00:05:24.574 52e9db722 util: allow a fd_group to manage all its fds
00:05:24.574 6082eddb0 util: fix total fds to wait for
00:05:24.574 8ce2f3c7d util: handle events for vfio fd type
00:05:24.574 381b6895f util: Extended options for spdk_fd_group_add
00:05:24.574 42d568143 nvme: interface to retrieve fd for a queue
00:05:24.589 [Pipeline] }
00:05:24.604 [Pipeline] // stage
00:05:24.614 [Pipeline] stage
00:05:24.616 [Pipeline] { (Prepare)
00:05:24.635 [Pipeline] writeFile
00:05:24.653 [Pipeline] sh
00:05:24.948 + logger -p user.info -t JENKINS-CI
00:05:24.963 [Pipeline] sh
00:05:25.261 + logger -p user.info -t JENKINS-CI
00:05:25.277 [Pipeline] sh
00:05:25.572 + cat autorun-spdk.conf
00:05:25.572 SPDK_RUN_FUNCTIONAL_TEST=1
00:05:25.572 SPDK_TEST_NVMF=1
00:05:25.572 SPDK_TEST_NVME_CLI=1
00:05:25.572 SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:25.572 SPDK_TEST_NVMF_NICS=e810
00:05:25.572 SPDK_TEST_VFIOUSER=1
00:05:25.572 SPDK_RUN_UBSAN=1
00:05:25.572 NET_TYPE=phy
00:05:25.582 RUN_NIGHTLY=0
00:05:25.587 [Pipeline] readFile
00:05:25.649 [Pipeline] withEnv
00:05:25.652 [Pipeline] {
00:05:25.665 [Pipeline] sh
00:05:25.959 + set -ex
00:05:25.959 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:05:25.959 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:05:25.959 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:25.959 ++ SPDK_TEST_NVMF=1
00:05:25.959 ++ SPDK_TEST_NVME_CLI=1
00:05:25.959 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:25.959 ++ SPDK_TEST_NVMF_NICS=e810
00:05:25.959 ++ SPDK_TEST_VFIOUSER=1
00:05:25.959 ++ SPDK_RUN_UBSAN=1
00:05:25.959 ++ NET_TYPE=phy
00:05:25.959 ++ RUN_NIGHTLY=0
00:05:25.959 + case $SPDK_TEST_NVMF_NICS in
00:05:25.959 + DRIVERS=ice
00:05:25.959 + [[ tcp == \r\d\m\a ]]
00:05:25.959 + [[ -n ice ]]
00:05:25.959 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:05:25.959 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:05:34.126 rmmod: ERROR: Module irdma is not currently loaded
00:05:34.126 rmmod: ERROR: Module i40iw is not currently loaded
00:05:34.126 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:05:34.126 + true
00:05:34.126 + for D in $DRIVERS
00:05:34.126 + sudo modprobe ice
00:05:34.126 + exit 0
00:05:34.138 [Pipeline] }
00:05:34.155 [Pipeline] // withEnv
00:05:34.161 [Pipeline] }
00:05:34.175 [Pipeline] // stage
00:05:34.185 [Pipeline] catchError
00:05:34.186 [Pipeline] {
00:05:34.200 [Pipeline] timeout
00:05:34.200 Timeout set to expire in 1 hr 0 min
00:05:34.202 [Pipeline] {
00:05:34.215 [Pipeline] stage
00:05:34.217 [Pipeline] { (Tests)
00:05:34.232 [Pipeline] sh
00:05:34.527 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:34.527 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:34.527 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:34.527 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:05:34.527 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:34.527 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:05:34.527 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:05:34.527 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:05:34.527 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:05:34.527 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:05:34.527 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:05:34.527 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:34.527 + source /etc/os-release
00:05:34.527 ++ NAME='Fedora Linux'
00:05:34.528 ++ VERSION='39 (Cloud Edition)'
00:05:34.528 ++ ID=fedora
00:05:34.528 ++ VERSION_ID=39
00:05:34.528 ++ VERSION_CODENAME=
00:05:34.528 ++ PLATFORM_ID=platform:f39
00:05:34.528 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:05:34.528 ++ ANSI_COLOR='0;38;2;60;110;180'
00:05:34.528 ++ LOGO=fedora-logo-icon
00:05:34.528 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:05:34.528 ++ HOME_URL=https://fedoraproject.org/
00:05:34.528 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:05:34.528 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:05:34.528 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:05:34.528 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:05:34.528 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:05:34.528 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:05:34.528 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:05:34.528 ++ SUPPORT_END=2024-11-12
00:05:34.528 ++ VARIANT='Cloud Edition'
00:05:34.528 ++ VARIANT_ID=cloud
00:05:34.528 + uname -a
00:05:34.528 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:05:34.528 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:37.840 Hugepages
00:05:37.840 node hugesize free / total
00:05:37.840 node0 1048576kB 0 / 0
00:05:37.840 node0 2048kB 0 / 0
00:05:37.840 node1 1048576kB 0 / 0
00:05:37.840 node1 2048kB 0 / 0
00:05:37.840
00:05:37.840 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:37.840 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:05:37.840 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:05:37.840 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:05:37.840 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:05:37.840 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:05:37.840 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:05:37.840 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:05:37.840 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:05:37.840 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:05:37.840 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:05:37.840 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:05:37.840 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:05:37.840 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:05:37.840 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:05:37.840 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:05:37.840 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:05:37.840 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:05:37.840 + rm -f /tmp/spdk-ld-path
00:05:37.840 + source autorun-spdk.conf
00:05:37.840 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:37.840 ++ SPDK_TEST_NVMF=1
00:05:37.840 ++ SPDK_TEST_NVME_CLI=1
00:05:37.840 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:37.840 ++ SPDK_TEST_NVMF_NICS=e810
00:05:37.840 ++ SPDK_TEST_VFIOUSER=1
00:05:37.840 ++ SPDK_RUN_UBSAN=1
00:05:37.840 ++ NET_TYPE=phy
00:05:37.840 ++ RUN_NIGHTLY=0
00:05:37.840 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:05:37.840 + [[ -n '' ]]
00:05:37.840 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:37.840 + for M in /var/spdk/build-*-manifest.txt
00:05:37.840 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:05:37.840 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:37.840 + for M in /var/spdk/build-*-manifest.txt
00:05:37.840 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:05:37.840 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:37.840 + for M in /var/spdk/build-*-manifest.txt
00:05:37.840 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:05:37.840 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:37.840 ++ uname
00:05:37.840 + [[ Linux == \L\i\n\u\x ]]
00:05:37.840 + sudo dmesg -T
00:05:37.840 + sudo dmesg --clear
00:05:37.840 + dmesg_pid=40910
00:05:37.840 + [[ Fedora Linux == FreeBSD ]]
00:05:37.840 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:37.840 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:37.840 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:05:37.840 + sudo dmesg -Tw
00:05:37.840 + [[ -x /usr/src/fio-static/fio ]]
00:05:37.840 + export FIO_BIN=/usr/src/fio-static/fio
00:05:37.840 + FIO_BIN=/usr/src/fio-static/fio
00:05:37.840 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:05:37.840 + [[ ! -v VFIO_QEMU_BIN ]]
00:05:37.840 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:05:37.840 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:37.840 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:37.840 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:05:37.840 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:37.840 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:37.840 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:05:37.840 Test configuration:
00:05:37.840 SPDK_RUN_FUNCTIONAL_TEST=1
00:05:37.840 SPDK_TEST_NVMF=1
00:05:37.840 SPDK_TEST_NVME_CLI=1
00:05:37.840 SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:37.840 SPDK_TEST_NVMF_NICS=e810
00:05:37.840 SPDK_TEST_VFIOUSER=1
00:05:37.840 SPDK_RUN_UBSAN=1
00:05:37.840 NET_TYPE=phy
00:05:37.840 RUN_NIGHTLY=0
17:21:29 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:05:37.840 17:21:29 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
17:21:29 -- scripts/common.sh@15 -- $ shopt -s extglob
17:21:29 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
17:21:29 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
17:21:29 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
17:21:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:21:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:21:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:21:29 -- paths/export.sh@5 -- $ export PATH
17:21:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:21:29 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
17:21:29 -- common/autobuild_common.sh@486 -- $ date +%s
00:05:38.103 17:21:29 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728400889.XXXXXX
00:05:38.103 17:21:29 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728400889.VDb6We
17:21:29 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
17:21:29 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
17:21:29 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
17:21:29 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
17:21:29 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
17:21:29 -- common/autobuild_common.sh@502 -- $ get_config_params
17:21:29 -- common/autotest_common.sh@407 -- $ xtrace_disable
17:21:29 -- common/autotest_common.sh@10 -- $ set +x
17:21:29 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
17:21:29 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
17:21:29 -- pm/common@17 -- $ local monitor
17:21:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
17:21:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
17:21:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
17:21:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
17:21:29 -- pm/common@21 -- $ date +%s
17:21:29 -- pm/common@25 -- $ sleep 1
17:21:29 -- pm/common@21 -- $ date +%s
17:21:29 -- pm/common@21 -- $ date +%s
17:21:29 -- pm/common@21 -- $ date +%s
17:21:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728400889
17:21:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728400889
17:21:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728400889
17:21:29 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728400889
00:05:38.103 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728400889_collect-cpu-load.pm.log
00:05:38.103 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728400889_collect-vmstat.pm.log
00:05:38.103 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728400889_collect-cpu-temp.pm.log
00:05:38.103 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728400889_collect-bmc-pm.bmc.pm.log
00:05:39.049 17:21:30 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:05:39.049 17:21:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:05:39.049 17:21:30 -- spdk/autobuild.sh@12 -- $ umask 022
00:05:39.049 17:21:30 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:39.049 17:21:30 -- spdk/autobuild.sh@16 -- $ date -u
00:05:39.049 Tue Oct 8 03:21:30 PM UTC 2024
00:05:39.049 17:21:30 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:05:39.049 v25.01-pre-50-g52e9db722
00:05:39.049 17:21:30 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:05:39.049 17:21:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:05:39.049 17:21:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:05:39.049 17:21:30 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:05:39.049 17:21:30 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:05:39.049 17:21:30 -- common/autotest_common.sh@10 -- $ set +x
00:05:39.049 ************************************
00:05:39.049 START TEST ubsan
00:05:39.049 ************************************
00:05:39.049 17:21:30 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:05:39.049 using ubsan
00:05:39.049
00:05:39.049 real 0m0.001s
00:05:39.049 user 0m0.000s
00:05:39.049 sys 0m0.000s
00:05:39.049 17:21:30 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:05:39.049 17:21:30 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:05:39.049 ************************************
00:05:39.049 END TEST ubsan
00:05:39.049 ************************************
00:05:39.049 17:21:30 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:05:39.049 17:21:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:05:39.049 17:21:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:05:39.049 17:21:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:05:39.049 17:21:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:05:39.049 17:21:30 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:05:39.049 17:21:30 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:05:39.049 17:21:30 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:05:39.049 17:21:30 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:05:39.623 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:05:39.623 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:05:40.568 Using 'verbs' RDMA provider
00:05:56.888 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:06:11.926 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:06:11.926 Creating mk/config.mk...done.
00:06:11.926 Creating mk/cc.flags.mk...done.
00:06:11.926 Type 'make' to build.
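[Note on the setup step above: the NIC/driver preparation is driven entirely by autorun-spdk.conf. The job sources the file, maps SPDK_TEST_NVMF_NICS=e810 to the ice driver, unloads competing RDMA modules (tolerating "not currently loaded" errors), and loads the driver it needs. A minimal standalone sketch of that pattern, with paths and variable names taken from the trace; the script framing itself is illustrative, not the actual jbp code:

    #!/usr/bin/env bash
    # Sketch of the conf-driven NIC driver setup seen in the log above.
    set -ex
    conf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
    [[ -f $conf ]]            # fail fast if the conf was never written
    source "$conf"            # defines SPDK_TEST_NVMF_NICS, NET_TYPE, RUN_NIGHTLY, ...
    case $SPDK_TEST_NVMF_NICS in
        e810) DRIVERS=ice ;;  # Intel E810 NICs are driven by the ice module
    esac
    if [[ -n $DRIVERS ]]; then
        # Drop modules that could claim the device first; a module that is
        # not loaded makes rmmod fail, so the failure is deliberately ignored
        # (this is the "+ true" after the rmmod errors in the trace).
        sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
        for D in $DRIVERS; do
            sudo modprobe "$D"
        done
    fi
]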
00:06:11.926 17:22:03 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:06:11.926 17:22:03 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:06:11.926 17:22:03 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:06:11.926 17:22:03 -- common/autotest_common.sh@10 -- $ set +x
00:06:11.926 ************************************
00:06:11.926 START TEST make
00:06:11.926 ************************************
00:06:11.926 17:22:03 make -- common/autotest_common.sh@1125 -- $ make -j144
00:06:11.926 make[1]: Nothing to be done for 'all'.
00:06:14.483 The Meson build system
00:06:14.483 Version: 1.5.0
00:06:14.483 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:06:14.483 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:06:14.483 Build type: native build
00:06:14.483 Project name: libvfio-user
00:06:14.483 Project version: 0.0.1
00:06:14.483 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:14.483 C linker for the host machine: cc ld.bfd 2.40-14
00:06:14.483 Host machine cpu family: x86_64
00:06:14.483 Host machine cpu: x86_64
00:06:14.483 Run-time dependency threads found: YES
00:06:14.483 Library dl found: YES
00:06:14.483 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:14.483 Run-time dependency json-c found: YES 0.17
00:06:14.483 Run-time dependency cmocka found: YES 1.1.7
00:06:14.483 Program pytest-3 found: NO
00:06:14.483 Program flake8 found: NO
00:06:14.483 Program misspell-fixer found: NO
00:06:14.483 Program restructuredtext-lint found: NO
00:06:14.483 Program valgrind found: YES (/usr/bin/valgrind)
00:06:14.483 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:06:14.483 Compiler for C supports arguments -Wmissing-declarations: YES
00:06:14.483 Compiler for C supports arguments -Wwrite-strings: YES
00:06:14.483 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:06:14.483 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:06:14.483 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:06:14.483 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:06:14.483 Build targets in project: 8
00:06:14.483 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:06:14.483 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:06:14.483
00:06:14.483 libvfio-user 0.0.1
00:06:14.483
00:06:14.483 User defined options
00:06:14.483 buildtype : debug
00:06:14.483 default_library: shared
00:06:14.483 libdir : /usr/local/lib
00:06:14.483
00:06:14.483 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:06:14.483 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:06:14.483 [1/37] Compiling C object samples/null.p/null.c.o
00:06:14.483 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:06:14.483 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:06:14.483 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:06:14.483 [5/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:06:14.483 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:06:14.483 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:06:14.483 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:06:14.483 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:06:14.483 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:06:14.483 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:06:14.483 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:06:14.483 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:06:14.483 [14/37] Compiling C object test/unit_tests.p/mocks.c.o
00:06:14.483 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:06:14.483 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:06:14.483 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:06:14.483 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:06:14.483 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:06:14.483 [20/37] Compiling C object samples/server.p/server.c.o
00:06:14.483 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:06:14.483 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:06:14.483 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:06:14.483 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:06:14.483 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:06:14.483 [26/37] Compiling C object samples/client.p/client.c.o
00:06:14.745 [27/37] Linking target samples/client
00:06:14.745 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:06:14.745 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:06:14.745 [30/37] Linking target test/unit_tests
00:06:14.745 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:06:15.008 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:06:15.008 [33/37] Linking target samples/server
00:06:15.008 [34/37] Linking target samples/null
00:06:15.008 [35/37] Linking target samples/gpio-pci-idio-16
00:06:15.008 [36/37] Linking target samples/lspci
00:06:15.008 [37/37] Linking target samples/shadow_ioeventfd_server
00:06:15.008 INFO: autodetecting backend as ninja
00:06:15.008 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
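[Note on the libvfio-user build above: it is a plain Meson/ninja flow, configured into build-debug with the options the summary lists (buildtype debug, default_library shared), built with ninja, then staged into a DESTDIR tree as the next records show. Reconstructed as standalone commands; the directories and the install invocation are copied from the log, but the meson setup line is inferred from the "User defined options" summary and is not shown verbatim in the trace:

    # Configure, build, and stage libvfio-user the way this job appears to.
    src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    build=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user
    meson setup "$build/build-debug" "$src" \
        --buildtype=debug -Ddefault_library=shared   # inferred from the summary
    ninja -C "$build/build-debug"                    # the 37 build steps above
    DESTDIR="$build" meson install --quiet -C "$build/build-debug"
]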
00:06:15.008 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:06:15.273 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:06:15.273 ninja: no work to do.
00:06:20.574 The Meson build system
00:06:20.575 Version: 1.5.0
00:06:20.575 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:06:20.575 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:06:20.575 Build type: native build
00:06:20.575 Program cat found: YES (/usr/bin/cat)
00:06:20.575 Project name: DPDK
00:06:20.575 Project version: 24.03.0
00:06:20.575 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:20.575 C linker for the host machine: cc ld.bfd 2.40-14
00:06:20.575 Host machine cpu family: x86_64
00:06:20.575 Host machine cpu: x86_64
00:06:20.575 Message: ## Building in Developer Mode ##
00:06:20.575 Program pkg-config found: YES (/usr/bin/pkg-config)
00:06:20.575 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:06:20.575 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:06:20.575 Program python3 found: YES (/usr/bin/python3)
00:06:20.575 Program cat found: YES (/usr/bin/cat)
00:06:20.575 Compiler for C supports arguments -march=native: YES
00:06:20.575 Checking for size of "void *" : 8
00:06:20.575 Checking for size of "void *" : 8 (cached)
00:06:20.575 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:06:20.575 Library m found: YES
00:06:20.575 Library numa found: YES
00:06:20.575 Has header "numaif.h" : YES
00:06:20.575 Library fdt found: NO
00:06:20.575 Library execinfo found: NO
00:06:20.575 Has header "execinfo.h" : YES
00:06:20.575 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:20.575 Run-time dependency libarchive found: NO (tried pkgconfig)
00:06:20.575 Run-time dependency libbsd found: NO (tried pkgconfig)
00:06:20.575 Run-time dependency jansson found: NO (tried pkgconfig)
00:06:20.575 Run-time dependency openssl found: YES 3.1.1
00:06:20.575 Run-time dependency libpcap found: YES 1.10.4
00:06:20.575 Has header "pcap.h" with dependency libpcap: YES
00:06:20.575 Compiler for C supports arguments -Wcast-qual: YES
00:06:20.575 Compiler for C supports arguments -Wdeprecated: YES
00:06:20.575 Compiler for C supports arguments -Wformat: YES
00:06:20.575 Compiler for C supports arguments -Wformat-nonliteral: NO
00:06:20.575 Compiler for C supports arguments -Wformat-security: NO
00:06:20.575 Compiler for C supports arguments -Wmissing-declarations: YES
00:06:20.575 Compiler for C supports arguments -Wmissing-prototypes: YES
00:06:20.575 Compiler for C supports arguments -Wnested-externs: YES
00:06:20.575 Compiler for C supports arguments -Wold-style-definition: YES
00:06:20.575 Compiler for C supports arguments -Wpointer-arith: YES
00:06:20.575 Compiler for C supports arguments -Wsign-compare: YES
00:06:20.575 Compiler for C supports arguments -Wstrict-prototypes: YES
00:06:20.575 Compiler for C supports arguments -Wundef: YES
00:06:20.575 Compiler for C supports arguments -Wwrite-strings: YES
00:06:20.575 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:06:20.575 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:06:20.575 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:06:20.575 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:06:20.575 Program objdump found: YES (/usr/bin/objdump)
00:06:20.575 Compiler for C supports arguments -mavx512f: YES
00:06:20.575 Checking if "AVX512 checking" compiles: YES
00:06:20.575 Fetching value of define "__SSE4_2__" : 1
00:06:20.575 Fetching value of define "__AES__" : 1
00:06:20.575 Fetching value of define "__AVX__" : 1
00:06:20.575 Fetching value of define "__AVX2__" : 1
00:06:20.575 Fetching value of define "__AVX512BW__" : 1
00:06:20.575 Fetching value of define "__AVX512CD__" : 1
00:06:20.575 Fetching value of define "__AVX512DQ__" : 1
00:06:20.575 Fetching value of define "__AVX512F__" : 1
00:06:20.575 Fetching value of define "__AVX512VL__" : 1
00:06:20.575 Fetching value of define "__PCLMUL__" : 1
00:06:20.575 Fetching value of define "__RDRND__" : 1
00:06:20.575 Fetching value of define "__RDSEED__" : 1
00:06:20.575 Fetching value of define "__VPCLMULQDQ__" : 1
00:06:20.575 Fetching value of define "__znver1__" : (undefined)
00:06:20.575 Fetching value of define "__znver2__" : (undefined)
00:06:20.575 Fetching value of define "__znver3__" : (undefined)
00:06:20.575 Fetching value of define "__znver4__" : (undefined)
00:06:20.575 Compiler for C supports arguments -Wno-format-truncation: YES
00:06:20.575 Message: lib/log: Defining dependency "log"
00:06:20.575 Message: lib/kvargs: Defining dependency "kvargs"
00:06:20.575 Message: lib/telemetry: Defining dependency "telemetry"
00:06:20.575 Checking for function "getentropy" : NO
00:06:20.575 Message: lib/eal: Defining dependency "eal"
00:06:20.575 Message: lib/ring: Defining dependency "ring"
00:06:20.575 Message: lib/rcu: Defining dependency "rcu"
00:06:20.575 Message: lib/mempool: Defining dependency "mempool"
00:06:20.575 Message: lib/mbuf: Defining dependency "mbuf"
00:06:20.575 Fetching value of define "__PCLMUL__" : 1 (cached)
00:06:20.575 Fetching value of define "__AVX512F__" : 1 (cached)
00:06:20.575 Fetching value of define "__AVX512BW__" : 1 (cached)
00:06:20.575 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:06:20.575 Fetching value of define "__AVX512VL__" : 1 (cached)
00:06:20.575 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:06:20.575 Compiler for C supports arguments -mpclmul: YES
00:06:20.575 Compiler for C supports arguments -maes: YES
00:06:20.575 Compiler for C supports arguments -mavx512f: YES (cached)
00:06:20.575 Compiler for C supports arguments -mavx512bw: YES
00:06:20.575 Compiler for C supports arguments -mavx512dq: YES
00:06:20.575 Compiler for C supports arguments -mavx512vl: YES
00:06:20.575 Compiler for C supports arguments -mvpclmulqdq: YES
00:06:20.575 Compiler for C supports arguments -mavx2: YES
00:06:20.575 Compiler for C supports arguments -mavx: YES
00:06:20.575 Message: lib/net: Defining dependency "net"
00:06:20.575 Message: lib/meter: Defining dependency "meter"
00:06:20.575 Message: lib/ethdev: Defining dependency "ethdev"
00:06:20.575 Message: lib/pci: Defining dependency "pci"
00:06:20.575 Message: lib/cmdline: Defining dependency "cmdline"
00:06:20.575 Message: lib/hash: Defining dependency "hash"
00:06:20.575 Message: lib/timer: Defining dependency "timer"
00:06:20.575 Message: lib/compressdev: Defining dependency "compressdev"
00:06:20.575 Message: lib/cryptodev: Defining dependency "cryptodev"
00:06:20.575 Message: lib/dmadev: Defining dependency "dmadev"
00:06:20.575 Compiler for C supports arguments -Wno-cast-qual: YES
00:06:20.575 Message: lib/power: Defining dependency "power"
00:06:20.575 Message: lib/reorder: Defining dependency "reorder"
00:06:20.575 Message: lib/security: Defining dependency "security"
00:06:20.575 Has header "linux/userfaultfd.h" : YES
00:06:20.575 Has header "linux/vduse.h" : YES
00:06:20.575 Message: lib/vhost: Defining dependency "vhost"
00:06:20.575 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:06:20.575 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:06:20.575 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:06:20.575 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:06:20.575 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:06:20.575 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:06:20.575 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:06:20.575 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:06:20.575 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:06:20.575 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:06:20.575 Program doxygen found: YES (/usr/local/bin/doxygen)
00:06:20.575 Configuring doxy-api-html.conf using configuration
00:06:20.575 Configuring doxy-api-man.conf using configuration
00:06:20.575 Program mandb found: YES (/usr/bin/mandb)
00:06:20.575 Program sphinx-build found: NO
00:06:20.575 Configuring rte_build_config.h using configuration
00:06:20.575 Message:
00:06:20.575 =================
00:06:20.575 Applications Enabled
00:06:20.575 =================
00:06:20.575
00:06:20.575 apps:
00:06:20.575
00:06:20.575
00:06:20.575 Message:
00:06:20.575 =================
00:06:20.575 Libraries Enabled
00:06:20.575 =================
00:06:20.575
00:06:20.575 libs:
00:06:20.575 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:06:20.575 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:06:20.575 cryptodev, dmadev, power, reorder, security, vhost,
00:06:20.575
00:06:20.575 Message:
00:06:20.575 ===============
00:06:20.575 Drivers Enabled
00:06:20.575 ===============
00:06:20.575
00:06:20.575 common:
00:06:20.575
00:06:20.575 bus:
00:06:20.575 pci, vdev,
00:06:20.575 mempool:
00:06:20.575 ring,
00:06:20.575 dma:
00:06:20.575
00:06:20.575 net:
00:06:20.575
00:06:20.575 crypto:
00:06:20.575
00:06:20.575 compress:
00:06:20.575
00:06:20.575 vdpa:
00:06:20.575
00:06:20.575
00:06:20.575 Message:
00:06:20.575 =================
00:06:20.575 Content Skipped
00:06:20.575 =================
00:06:20.575
00:06:20.575 apps:
00:06:20.575 dumpcap: explicitly disabled via build config
00:06:20.575 graph: explicitly disabled via build config
00:06:20.575 pdump: explicitly disabled via build config
00:06:20.575 proc-info: explicitly disabled via build config
00:06:20.575 test-acl: explicitly disabled via build config
00:06:20.575 test-bbdev: explicitly disabled via build config
00:06:20.575 test-cmdline: explicitly disabled via build config
00:06:20.575 test-compress-perf: explicitly disabled via build config
00:06:20.575 test-crypto-perf: explicitly disabled via build config
00:06:20.575 test-dma-perf: explicitly disabled via build config
00:06:20.575 test-eventdev: explicitly disabled via build config
00:06:20.575 test-fib: explicitly disabled via build config
00:06:20.575 test-flow-perf: explicitly disabled via build config
00:06:20.575 test-gpudev: explicitly disabled via build config
00:06:20.575 test-mldev: explicitly disabled via build config
00:06:20.575 test-pipeline: explicitly disabled via build config
00:06:20.575 test-pmd: explicitly disabled via build config
00:06:20.575 test-regex: explicitly disabled via build config
00:06:20.575 test-sad: explicitly disabled via build config
00:06:20.575 test-security-perf: explicitly disabled via build config
00:06:20.575
00:06:20.575 libs:
00:06:20.575 argparse: explicitly disabled via build config
00:06:20.575 metrics: explicitly disabled via build config
00:06:20.575 acl: explicitly disabled via build config
00:06:20.575 bbdev: explicitly disabled via build config
00:06:20.575 bitratestats: explicitly disabled via build config
00:06:20.575 bpf: explicitly disabled via build config
00:06:20.575 cfgfile: explicitly disabled via build config
00:06:20.575 distributor: explicitly disabled via build config
00:06:20.575 efd: explicitly disabled via build config
00:06:20.576 eventdev: explicitly disabled via build config
00:06:20.576 dispatcher: explicitly disabled via build config
00:06:20.576 gpudev: explicitly disabled via build config
00:06:20.576 gro: explicitly disabled via build config
00:06:20.576 gso: explicitly disabled via build config
00:06:20.576 ip_frag: explicitly disabled via build config
00:06:20.576 jobstats: explicitly disabled via build config
00:06:20.576 latencystats: explicitly disabled via build config
00:06:20.576 lpm: explicitly disabled via build config
00:06:20.576 member: explicitly disabled via build config
00:06:20.576 pcapng: explicitly disabled via build config
00:06:20.576 rawdev: explicitly disabled via build config
00:06:20.576 regexdev: explicitly disabled via build config
00:06:20.576 mldev: explicitly disabled via build config
00:06:20.576 rib: explicitly disabled via build config
00:06:20.576 sched: explicitly disabled via build config
00:06:20.576 stack: explicitly disabled via build config
00:06:20.576 ipsec: explicitly disabled via build config
00:06:20.576 pdcp: explicitly disabled via build config
00:06:20.576 fib: explicitly disabled via build config
00:06:20.576 port: explicitly disabled via build config
00:06:20.576 pdump: explicitly disabled via build config
00:06:20.576 table: explicitly disabled via build config
00:06:20.576 pipeline: explicitly disabled via build config
00:06:20.576 graph: explicitly disabled via build config
00:06:20.576 node: explicitly disabled via build config
00:06:20.576
00:06:20.576 drivers:
00:06:20.576 common/cpt: not in enabled drivers build config
00:06:20.576 common/dpaax: not in enabled drivers build config
00:06:20.576 common/iavf: not in enabled drivers build config
00:06:20.576 common/idpf: not in enabled drivers build config
00:06:20.576 common/ionic: not in enabled drivers build config
00:06:20.576 common/mvep: not in enabled drivers build config
00:06:20.576 common/octeontx: not in enabled drivers build config
00:06:20.576 bus/auxiliary: not in enabled drivers build config
00:06:20.576 bus/cdx: not in enabled drivers build config
00:06:20.576 bus/dpaa: not in enabled drivers build config
00:06:20.576 bus/fslmc: not in enabled drivers build config
00:06:20.576 bus/ifpga: not in enabled drivers build config
00:06:20.576 bus/platform: not in enabled drivers build config
00:06:20.576 bus/uacce: not in enabled drivers build config
00:06:20.576 bus/vmbus: not in enabled drivers build config
00:06:20.576 common/cnxk: not in enabled drivers build config
00:06:20.576 common/mlx5: not in enabled drivers build config
00:06:20.576 common/nfp: not in enabled drivers build config
00:06:20.576 common/nitrox: not in enabled drivers build config
00:06:20.576 common/qat: not in enabled drivers build config
00:06:20.576 common/sfc_efx: not in enabled drivers build config
00:06:20.576 mempool/bucket: not in enabled drivers build config
00:06:20.576 mempool/cnxk: not in enabled drivers build config
00:06:20.576 mempool/dpaa: not in enabled drivers build config
00:06:20.576 mempool/dpaa2: not in enabled drivers build config
00:06:20.576 mempool/octeontx: not in enabled drivers build config
00:06:20.576 mempool/stack: not in enabled drivers build config
00:06:20.576 dma/cnxk: not in enabled drivers build config
00:06:20.576 dma/dpaa: not in enabled drivers build config
00:06:20.576 dma/dpaa2: not in enabled drivers build config
00:06:20.576 dma/hisilicon: not in enabled drivers build config
00:06:20.576 dma/idxd: not in enabled drivers build config
00:06:20.576 dma/ioat: not in enabled drivers build config
00:06:20.576 dma/skeleton: not in enabled drivers build config
00:06:20.576 net/af_packet: not in enabled drivers build config
00:06:20.576 net/af_xdp: not in enabled drivers build config
00:06:20.576 net/ark: not in enabled drivers build config
00:06:20.576 net/atlantic: not in enabled drivers build config
00:06:20.576 net/avp: not in enabled drivers build config
00:06:20.576 net/axgbe: not in enabled drivers build config
00:06:20.576 net/bnx2x: not in enabled drivers build config
00:06:20.576 net/bnxt: not in enabled drivers build config
00:06:20.576 net/bonding: not in enabled drivers build config
00:06:20.576 net/cnxk: not in enabled drivers build config
00:06:20.576 net/cpfl: not in enabled drivers build config
00:06:20.576 net/cxgbe: not in enabled drivers build config
00:06:20.576 net/dpaa: not in enabled drivers build config
00:06:20.576 net/dpaa2: not in enabled drivers build config
00:06:20.576 net/e1000: not in enabled drivers build config
00:06:20.576 net/ena: not in enabled drivers build config
00:06:20.576 net/enetc: not in enabled drivers build config
00:06:20.576 net/enetfec: not in enabled drivers build config
00:06:20.576 net/enic: not in enabled drivers build config
00:06:20.576 net/failsafe: not in enabled drivers build config
00:06:20.576 net/fm10k: not in enabled drivers build config
00:06:20.576 net/gve: not in enabled drivers build config
00:06:20.576 net/hinic: not in enabled drivers build config
00:06:20.576 net/hns3: not in enabled drivers build config
00:06:20.576 net/i40e: not in enabled drivers build config
00:06:20.576 net/iavf: not in enabled drivers build config
00:06:20.576 net/ice: not in enabled drivers build config
00:06:20.576 net/idpf: not in enabled drivers build config
00:06:20.576 net/igc: not in enabled drivers build config
00:06:20.576 net/ionic: not in enabled drivers build config
00:06:20.576 net/ipn3ke: not in enabled drivers build config
00:06:20.576 net/ixgbe: not in enabled drivers build config
00:06:20.576 net/mana: not in enabled drivers build config
00:06:20.576 net/memif: not in enabled drivers build config
00:06:20.576 net/mlx4: not in enabled drivers build config
00:06:20.576 net/mlx5: not in enabled drivers build config
00:06:20.576 net/mvneta: not in enabled drivers build config
00:06:20.576 net/mvpp2: not in enabled drivers build config
00:06:20.576 net/netvsc: not in enabled drivers build config
00:06:20.576 net/nfb: not in enabled drivers build config
00:06:20.576 net/nfp: not in enabled drivers build config
00:06:20.576 net/ngbe: not in enabled drivers build config
00:06:20.576 net/null: not in enabled drivers build config
00:06:20.576 net/octeontx: not in enabled drivers build config
00:06:20.576 net/octeon_ep: not in enabled drivers build config
00:06:20.576 net/pcap: not in enabled drivers build config
00:06:20.576 net/pfe: not in enabled drivers build config
00:06:20.576 net/qede: not in enabled drivers build config
00:06:20.576 net/ring: not in enabled drivers build config
00:06:20.576 net/sfc: not in enabled drivers build config
00:06:20.576 net/softnic: not in enabled drivers build config
00:06:20.576 net/tap: not in enabled drivers build config
00:06:20.576 net/thunderx: not in enabled drivers build config
00:06:20.576 net/txgbe: not in enabled drivers build config
00:06:20.576 net/vdev_netvsc: not in enabled drivers build config
00:06:20.576 net/vhost: not in enabled drivers build config
00:06:20.576 net/virtio: not in enabled drivers build config
00:06:20.576 net/vmxnet3: not in enabled drivers build config
00:06:20.576 raw/*: missing internal dependency, "rawdev"
00:06:20.576 crypto/armv8: not in enabled drivers build config
00:06:20.576 crypto/bcmfs: not in enabled drivers build config
00:06:20.576 crypto/caam_jr: not in enabled drivers build config
00:06:20.576 crypto/ccp: not in enabled drivers build config
00:06:20.576 crypto/cnxk: not in enabled drivers build config
00:06:20.576 crypto/dpaa_sec: not in enabled drivers build config
00:06:20.576 crypto/dpaa2_sec: not in enabled drivers build config
00:06:20.576 crypto/ipsec_mb: not in enabled drivers build config
00:06:20.576 crypto/mlx5: not in enabled drivers build config
00:06:20.576 crypto/mvsam: not in enabled drivers build config
00:06:20.576 crypto/nitrox: not in enabled drivers build config
00:06:20.576 crypto/null: not in enabled drivers build config
00:06:20.576 crypto/octeontx: not in enabled drivers build config
00:06:20.576 crypto/openssl: not in enabled drivers build config
00:06:20.576 crypto/scheduler: not in enabled drivers build config
00:06:20.576 crypto/uadk: not in enabled drivers build config
00:06:20.576 crypto/virtio: not in enabled drivers build config
00:06:20.576 compress/isal: not in enabled drivers build config
00:06:20.576 compress/mlx5: not in enabled drivers build config
00:06:20.576 compress/nitrox: not in enabled drivers build config
00:06:20.576 compress/octeontx: not in enabled drivers build config
00:06:20.576 compress/zlib: not in enabled drivers build config
00:06:20.576 regex/*: missing internal dependency, "regexdev"
00:06:20.576 ml/*: missing internal dependency, "mldev"
00:06:20.576 vdpa/ifc: not in enabled drivers build config
00:06:20.576 vdpa/mlx5: not in enabled drivers build config
00:06:20.576 vdpa/nfp: not in enabled drivers build config
00:06:20.576 vdpa/sfc: not in enabled drivers build config
00:06:20.576 event/*: missing internal dependency, "eventdev"
00:06:20.576 baseband/*: missing internal dependency, "bbdev"
00:06:20.576 gpu/*: missing internal dependency, "gpudev"
00:06:20.576
00:06:20.576
00:06:20.576 Build targets in project: 84
00:06:20.576
00:06:20.576 DPDK 24.03.0
00:06:20.576
00:06:20.576 User defined options
00:06:20.576 buildtype : debug
00:06:20.576 default_library : shared
00:06:20.576 libdir : lib
00:06:20.576 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:06:20.576 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:06:20.576 c_link_args :
00:06:20.576 cpu_instruction_set: native
00:06:20.576 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:06:20.576 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:06:20.576 enable_docs : false
00:06:20.576 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:06:20.576 enable_kmods : false
00:06:20.576 max_lcores : 128
00:06:20.576 tests : false
00:06:20.576
00:06:20.576 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:06:20.843 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:06:21.116 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:06:21.116 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:06:21.116 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:06:21.116 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:06:21.116 [5/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:06:21.116 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:06:21.116 [7/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:06:21.116 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:06:21.116 [9/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:06:21.116 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:06:21.116 [11/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:06:21.116 [12/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:06:21.116 [13/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:06:21.116 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:06:21.116 [15/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:06:21.116 [16/267] Linking static target lib/librte_kvargs.a
00:06:21.116 [17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:06:21.116 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:06:21.116 [19/267] Linking static target lib/librte_log.a
00:06:21.116 [20/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:06:21.116 [21/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:06:21.116 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:06:21.116 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:06:21.116 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:06:21.116 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:06:21.116 [26/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:06:21.376 [27/267] Linking static target lib/librte_pci.a
00:06:21.376 [28/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:06:21.376 [29/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:06:21.376 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:06:21.376 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:06:21.376 [32/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:06:21.376 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:06:21.376 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:06:21.376 [35/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:06:21.376 [36/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:06:21.376 [37/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:06:21.376 [38/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:06:21.376 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:06:21.638 [40/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:06:21.638 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:06:21.638 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:06:21.638 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:06:21.638 [44/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:06:21.638 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:06:21.638 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:06:21.638 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:06:21.638 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:06:21.638 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:06:21.638 [50/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:06:21.638 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:06:21.638 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:06:21.638 [53/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:06:21.638 [54/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:06:21.638 [55/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:06:21.638 [56/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:06:21.638 [57/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:06:21.638 [58/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:06:21.638 [59/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:06:21.638 [60/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:06:21.638 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:06:21.638 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:06:21.638 [63/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:06:21.638 [64/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:06:21.638 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:06:21.638 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:06:21.638 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:06:21.638 [68/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:06:21.638 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:06:21.638 [70/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:06:21.638 [71/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:06:21.638 [72/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:06:21.638 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:06:21.638 [74/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:06:21.638 [75/267] Linking static target lib/librte_ring.a
00:06:21.638 [76/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:06:21.638 [77/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:06:21.638 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:06:21.638 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:06:21.638 [80/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:06:21.638 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:06:21.638 [82/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:06:21.638 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:06:21.638 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:06:21.638 [85/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:06:21.638 [86/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:06:21.638 [87/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:06:21.638 [88/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:06:21.638 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:06:21.638 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:06:21.638 [91/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:06:21.638 [92/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:06:21.638 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:06:21.638 [94/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:06:21.638 [95/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:06:21.638 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:06:21.638 [97/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:06:21.638 [98/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:06:21.638 [99/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:06:21.638 [100/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:06:21.638 [101/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:06:21.638 [102/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:06:21.638 [103/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:06:21.638 [104/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:06:21.638 [105/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:06:21.638 [106/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:06:21.638 [107/267] Linking static target lib/librte_telemetry.a
00:06:21.638 [108/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:06:21.638 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:06:21.638 [110/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:21.638 [71/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:06:21.638 [72/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:21.638 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:06:21.638 [74/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:21.638 [75/267] Linking static target lib/librte_ring.a 00:06:21.638 [76/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:21.638 [77/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:21.638 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:21.638 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:21.638 [80/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:21.638 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:21.638 [82/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:06:21.638 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:21.638 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:21.638 [85/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:21.638 [86/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:21.638 [87/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:21.638 [88/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:21.638 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:06:21.638 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:21.638 [91/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:21.638 [92/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:21.638 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:21.638 [94/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:21.638 [95/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:21.638 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:21.638 [97/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:21.638 [98/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:21.638 [99/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:06:21.638 [100/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:21.638 [101/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:21.638 [102/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:21.638 [103/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:21.638 [104/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:06:21.638 [105/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:21.638 [106/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:21.638 [107/267] Linking static target lib/librte_telemetry.a 00:06:21.638 [108/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:21.638 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:21.638 [110/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 
00:06:21.638 [111/267] Linking static target lib/librte_meter.a 00:06:21.639 [112/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:06:21.639 [113/267] Linking static target lib/librte_rcu.a 00:06:21.639 [114/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:21.639 [115/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:21.639 [116/267] Linking static target lib/librte_net.a 00:06:21.639 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:21.639 [118/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:21.639 [119/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:06:21.639 [120/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:21.639 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:21.639 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:21.639 [123/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:21.639 [124/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:21.639 [125/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:21.639 [126/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:06:21.639 [127/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:21.639 [128/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:21.639 [129/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:21.901 [130/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:21.901 [131/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:21.901 [132/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:21.901 [133/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:21.901 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:21.901 [135/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:21.901 [136/267] Linking static target lib/librte_cmdline.a 00:06:21.901 [137/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:21.901 [138/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:21.901 [139/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:21.901 [140/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:21.901 [141/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:21.901 [142/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:21.901 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:21.901 [144/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:21.901 [145/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:21.901 [146/267] Linking static target lib/librte_dmadev.a 00:06:21.901 [147/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:21.901 [148/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:21.901 [149/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:21.901 [150/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:21.901 [151/267] Linking static target 
lib/librte_reorder.a 00:06:21.901 [152/267] Linking static target lib/librte_timer.a 00:06:21.901 [153/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:21.901 [154/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:21.901 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:21.901 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:21.901 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:21.901 [158/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:21.901 [159/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:21.901 [160/267] Linking target lib/librte_log.so.24.1 00:06:21.901 [161/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:21.901 [162/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:21.901 [163/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:21.901 [164/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:21.901 [165/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:21.901 [166/267] Linking static target lib/librte_security.a 00:06:21.901 [167/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:21.901 [168/267] Linking static target lib/librte_mempool.a 00:06:21.901 [169/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:21.901 [170/267] Linking static target lib/librte_power.a 00:06:21.901 [171/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:21.901 [172/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:21.901 [173/267] Linking static target lib/librte_compressdev.a 00:06:21.901 [174/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:21.901 [175/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:21.901 [176/267] Linking static target lib/librte_mbuf.a 00:06:21.901 [177/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:21.901 [178/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:21.901 [179/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:21.901 [180/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:21.901 [181/267] Linking static target lib/librte_eal.a 00:06:21.901 [182/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:21.901 [183/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:21.901 [184/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:06:21.901 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:21.901 [186/267] Linking static target lib/librte_hash.a 00:06:22.164 [187/267] Linking target lib/librte_kvargs.so.24.1 00:06:22.164 [188/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:22.164 [189/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:22.164 [190/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:22.164 [191/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:22.164 [192/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:22.164 [193/267] Compiling 
C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:22.164 [194/267] Linking static target drivers/librte_mempool_ring.a 00:06:22.164 [195/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:22.164 [196/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:22.164 [197/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:22.164 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:22.164 [199/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:22.164 [200/267] Linking static target lib/librte_cryptodev.a 00:06:22.164 [201/267] Linking static target drivers/librte_bus_vdev.a 00:06:22.164 [202/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:22.164 [203/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:22.164 [204/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:06:22.164 [205/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:22.164 [206/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:22.164 [207/267] Linking static target drivers/librte_bus_pci.a 00:06:22.426 [208/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:22.426 [209/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:22.426 [210/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:22.426 [211/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:22.426 [212/267] Linking target lib/librte_telemetry.so.24.1 00:06:22.426 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:22.426 [214/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:22.688 [215/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:06:22.688 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:22.688 [217/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:22.688 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:22.688 [219/267] Linking static target lib/librte_ethdev.a 00:06:22.688 [220/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:22.951 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:22.951 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:22.951 [223/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:22.951 [224/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:23.213 [225/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:23.213 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:23.786 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:23.786 [228/267] Linking static target lib/librte_vhost.a 00:06:24.360 [229/267] Generating lib/cryptodev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:06:25.752 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.344 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:33.287 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:33.287 [233/267] Linking target lib/librte_eal.so.24.1 00:06:33.549 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:06:33.549 [235/267] Linking target lib/librte_ring.so.24.1 00:06:33.549 [236/267] Linking target lib/librte_meter.so.24.1 00:06:33.549 [237/267] Linking target lib/librte_pci.so.24.1 00:06:33.549 [238/267] Linking target drivers/librte_bus_vdev.so.24.1 00:06:33.549 [239/267] Linking target lib/librte_timer.so.24.1 00:06:33.549 [240/267] Linking target lib/librte_dmadev.so.24.1 00:06:33.549 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:06:33.549 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:06:33.549 [243/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:06:33.549 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:06:33.549 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:06:33.549 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:06:33.810 [247/267] Linking target lib/librte_rcu.so.24.1 00:06:33.810 [248/267] Linking target lib/librte_mempool.so.24.1 00:06:33.810 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:06:33.810 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:06:33.810 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:06:33.810 [252/267] Linking target lib/librte_mbuf.so.24.1 00:06:34.071 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:06:34.071 [254/267] Linking target lib/librte_cryptodev.so.24.1 00:06:34.071 [255/267] Linking target lib/librte_net.so.24.1 00:06:34.071 [256/267] Linking target lib/librte_compressdev.so.24.1 00:06:34.071 [257/267] Linking target lib/librte_reorder.so.24.1 00:06:34.071 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:06:34.071 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:06:34.332 [260/267] Linking target lib/librte_hash.so.24.1 00:06:34.332 [261/267] Linking target lib/librte_security.so.24.1 00:06:34.332 [262/267] Linking target lib/librte_cmdline.so.24.1 00:06:34.332 [263/267] Linking target lib/librte_ethdev.so.24.1 00:06:34.332 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:06:34.332 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:06:34.332 [266/267] Linking target lib/librte_power.so.24.1 00:06:34.593 [267/267] Linking target lib/librte_vhost.so.24.1 00:06:34.593 INFO: autodetecting backend as ninja 00:06:34.593 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:06:36.513 CC lib/log/log.o 00:06:36.513 CC lib/log/log_flags.o 00:06:36.513 CC lib/log/log_deprecated.o 00:06:36.513 CC lib/ut/ut.o 00:06:36.513 CC lib/ut_mock/mock.o 00:06:36.777 LIB libspdk_ut.a 00:06:36.777 LIB 
libspdk_log.a 00:06:36.777 LIB libspdk_ut_mock.a 00:06:36.777 SO libspdk_ut.so.2.0 00:06:36.777 SO libspdk_ut_mock.so.6.0 00:06:36.777 SO libspdk_log.so.7.0 00:06:36.777 SYMLINK libspdk_ut_mock.so 00:06:36.777 SYMLINK libspdk_ut.so 00:06:36.777 SYMLINK libspdk_log.so 00:06:37.351 CC lib/dma/dma.o 00:06:37.351 CXX lib/trace_parser/trace.o 00:06:37.351 CC lib/util/base64.o 00:06:37.351 CC lib/util/bit_array.o 00:06:37.351 CC lib/ioat/ioat.o 00:06:37.351 CC lib/util/cpuset.o 00:06:37.351 CC lib/util/crc16.o 00:06:37.351 CC lib/util/crc32.o 00:06:37.351 CC lib/util/crc32c.o 00:06:37.351 CC lib/util/crc32_ieee.o 00:06:37.351 CC lib/util/crc64.o 00:06:37.351 CC lib/util/dif.o 00:06:37.351 CC lib/util/fd.o 00:06:37.351 CC lib/util/fd_group.o 00:06:37.351 CC lib/util/file.o 00:06:37.351 CC lib/util/hexlify.o 00:06:37.351 CC lib/util/iov.o 00:06:37.351 CC lib/util/math.o 00:06:37.351 CC lib/util/net.o 00:06:37.351 CC lib/util/pipe.o 00:06:37.351 CC lib/util/strerror_tls.o 00:06:37.351 CC lib/util/string.o 00:06:37.351 CC lib/util/uuid.o 00:06:37.351 CC lib/util/xor.o 00:06:37.351 CC lib/util/zipf.o 00:06:37.351 CC lib/util/md5.o 00:06:37.351 CC lib/vfio_user/host/vfio_user.o 00:06:37.351 CC lib/vfio_user/host/vfio_user_pci.o 00:06:37.351 LIB libspdk_dma.a 00:06:37.613 SO libspdk_dma.so.5.0 00:06:37.613 SYMLINK libspdk_dma.so 00:06:37.613 LIB libspdk_ioat.a 00:06:37.613 SO libspdk_ioat.so.7.0 00:06:37.613 SYMLINK libspdk_ioat.so 00:06:37.613 LIB libspdk_vfio_user.a 00:06:37.613 SO libspdk_vfio_user.so.5.0 00:06:37.613 SYMLINK libspdk_vfio_user.so 00:06:37.875 LIB libspdk_util.a 00:06:37.875 SO libspdk_util.so.10.1 00:06:37.875 SYMLINK libspdk_util.so 00:06:38.447 CC lib/rdma_provider/common.o 00:06:38.447 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:38.447 CC lib/json/json_parse.o 00:06:38.447 CC lib/vmd/vmd.o 00:06:38.447 CC lib/vmd/led.o 00:06:38.447 CC lib/json/json_util.o 00:06:38.447 CC lib/rdma_utils/rdma_utils.o 00:06:38.447 CC lib/json/json_write.o 00:06:38.447 CC lib/conf/conf.o 00:06:38.447 CC lib/env_dpdk/env.o 00:06:38.447 CC lib/env_dpdk/memory.o 00:06:38.447 CC lib/idxd/idxd.o 00:06:38.447 CC lib/env_dpdk/pci.o 00:06:38.447 CC lib/idxd/idxd_user.o 00:06:38.447 CC lib/env_dpdk/init.o 00:06:38.447 CC lib/idxd/idxd_kernel.o 00:06:38.447 CC lib/env_dpdk/threads.o 00:06:38.447 CC lib/env_dpdk/pci_ioat.o 00:06:38.447 CC lib/env_dpdk/pci_virtio.o 00:06:38.447 CC lib/env_dpdk/pci_vmd.o 00:06:38.447 CC lib/env_dpdk/pci_idxd.o 00:06:38.447 CC lib/env_dpdk/pci_event.o 00:06:38.447 CC lib/env_dpdk/sigbus_handler.o 00:06:38.447 CC lib/env_dpdk/pci_dpdk.o 00:06:38.447 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:38.447 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:38.447 LIB libspdk_rdma_provider.a 00:06:38.447 SO libspdk_rdma_provider.so.6.0 00:06:38.447 LIB libspdk_conf.a 00:06:38.709 SO libspdk_conf.so.6.0 00:06:38.709 LIB libspdk_json.a 00:06:38.709 LIB libspdk_rdma_utils.a 00:06:38.709 SYMLINK libspdk_rdma_provider.so 00:06:38.709 SYMLINK libspdk_conf.so 00:06:38.709 SO libspdk_json.so.6.0 00:06:38.709 SO libspdk_rdma_utils.so.1.0 00:06:38.709 SYMLINK libspdk_rdma_utils.so 00:06:38.709 SYMLINK libspdk_json.so 00:06:38.971 LIB libspdk_idxd.a 00:06:38.971 LIB libspdk_vmd.a 00:06:38.971 SO libspdk_idxd.so.12.1 00:06:38.971 LIB libspdk_trace_parser.a 00:06:38.971 SO libspdk_vmd.so.6.0 00:06:38.971 SO libspdk_trace_parser.so.6.0 00:06:38.971 SYMLINK libspdk_idxd.so 00:06:38.971 SYMLINK libspdk_vmd.so 00:06:38.971 SYMLINK libspdk_trace_parser.so 00:06:38.971 CC lib/jsonrpc/jsonrpc_server.o 00:06:38.971 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:06:38.971 CC lib/jsonrpc/jsonrpc_client.o 00:06:38.971 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:39.233 LIB libspdk_jsonrpc.a 00:06:39.496 SO libspdk_jsonrpc.so.6.0 00:06:39.496 SYMLINK libspdk_jsonrpc.so 00:06:39.496 LIB libspdk_env_dpdk.a 00:06:39.758 SO libspdk_env_dpdk.so.15.1 00:06:39.758 SYMLINK libspdk_env_dpdk.so 00:06:39.758 CC lib/rpc/rpc.o 00:06:40.020 LIB libspdk_rpc.a 00:06:40.020 SO libspdk_rpc.so.6.0 00:06:40.283 SYMLINK libspdk_rpc.so 00:06:40.545 CC lib/trace/trace.o 00:06:40.545 CC lib/keyring/keyring.o 00:06:40.545 CC lib/trace/trace_flags.o 00:06:40.545 CC lib/keyring/keyring_rpc.o 00:06:40.545 CC lib/trace/trace_rpc.o 00:06:40.545 CC lib/notify/notify.o 00:06:40.545 CC lib/notify/notify_rpc.o 00:06:40.807 LIB libspdk_notify.a 00:06:40.807 SO libspdk_notify.so.6.0 00:06:40.807 LIB libspdk_keyring.a 00:06:40.807 LIB libspdk_trace.a 00:06:40.807 SO libspdk_keyring.so.2.0 00:06:40.807 SYMLINK libspdk_notify.so 00:06:40.807 SO libspdk_trace.so.11.0 00:06:40.807 SYMLINK libspdk_keyring.so 00:06:40.807 SYMLINK libspdk_trace.so 00:06:41.381 CC lib/thread/thread.o 00:06:41.381 CC lib/thread/iobuf.o 00:06:41.381 CC lib/sock/sock.o 00:06:41.381 CC lib/sock/sock_rpc.o 00:06:41.642 LIB libspdk_sock.a 00:06:41.642 SO libspdk_sock.so.10.0 00:06:41.642 SYMLINK libspdk_sock.so 00:06:42.217 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:42.217 CC lib/nvme/nvme_ctrlr.o 00:06:42.217 CC lib/nvme/nvme_fabric.o 00:06:42.217 CC lib/nvme/nvme_ns_cmd.o 00:06:42.217 CC lib/nvme/nvme_ns.o 00:06:42.217 CC lib/nvme/nvme_pcie_common.o 00:06:42.217 CC lib/nvme/nvme_pcie.o 00:06:42.217 CC lib/nvme/nvme_qpair.o 00:06:42.217 CC lib/nvme/nvme.o 00:06:42.217 CC lib/nvme/nvme_quirks.o 00:06:42.217 CC lib/nvme/nvme_transport.o 00:06:42.217 CC lib/nvme/nvme_discovery.o 00:06:42.217 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:42.217 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:42.217 CC lib/nvme/nvme_tcp.o 00:06:42.217 CC lib/nvme/nvme_opal.o 00:06:42.217 CC lib/nvme/nvme_io_msg.o 00:06:42.217 CC lib/nvme/nvme_poll_group.o 00:06:42.217 CC lib/nvme/nvme_zns.o 00:06:42.217 CC lib/nvme/nvme_stubs.o 00:06:42.217 CC lib/nvme/nvme_auth.o 00:06:42.217 CC lib/nvme/nvme_cuse.o 00:06:42.217 CC lib/nvme/nvme_vfio_user.o 00:06:42.217 CC lib/nvme/nvme_rdma.o 00:06:42.480 LIB libspdk_thread.a 00:06:42.480 SO libspdk_thread.so.10.2 00:06:42.742 SYMLINK libspdk_thread.so 00:06:43.005 CC lib/blob/blobstore.o 00:06:43.005 CC lib/blob/request.o 00:06:43.005 CC lib/blob/zeroes.o 00:06:43.005 CC lib/blob/blob_bs_dev.o 00:06:43.005 CC lib/fsdev/fsdev.o 00:06:43.005 CC lib/fsdev/fsdev_io.o 00:06:43.005 CC lib/fsdev/fsdev_rpc.o 00:06:43.005 CC lib/accel/accel.o 00:06:43.005 CC lib/accel/accel_rpc.o 00:06:43.005 CC lib/accel/accel_sw.o 00:06:43.005 CC lib/virtio/virtio.o 00:06:43.005 CC lib/vfu_tgt/tgt_endpoint.o 00:06:43.005 CC lib/virtio/virtio_vhost_user.o 00:06:43.005 CC lib/vfu_tgt/tgt_rpc.o 00:06:43.005 CC lib/init/json_config.o 00:06:43.005 CC lib/virtio/virtio_vfio_user.o 00:06:43.005 CC lib/init/subsystem.o 00:06:43.005 CC lib/virtio/virtio_pci.o 00:06:43.005 CC lib/init/subsystem_rpc.o 00:06:43.005 CC lib/init/rpc.o 00:06:43.266 LIB libspdk_init.a 00:06:43.266 SO libspdk_init.so.6.0 00:06:43.529 LIB libspdk_virtio.a 00:06:43.529 LIB libspdk_vfu_tgt.a 00:06:43.529 SO libspdk_virtio.so.7.0 00:06:43.529 SO libspdk_vfu_tgt.so.3.0 00:06:43.529 SYMLINK libspdk_init.so 00:06:43.529 SYMLINK libspdk_vfu_tgt.so 00:06:43.529 SYMLINK libspdk_virtio.so 00:06:43.791 LIB libspdk_fsdev.a 00:06:43.791 SO libspdk_fsdev.so.1.0 
00:06:43.791 SYMLINK libspdk_fsdev.so 00:06:43.791 CC lib/event/app.o 00:06:43.791 CC lib/event/reactor.o 00:06:43.791 CC lib/event/log_rpc.o 00:06:43.791 CC lib/event/app_rpc.o 00:06:43.791 CC lib/event/scheduler_static.o 00:06:44.053 LIB libspdk_accel.a 00:06:44.053 LIB libspdk_nvme.a 00:06:44.053 SO libspdk_accel.so.16.0 00:06:44.053 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:44.053 SYMLINK libspdk_accel.so 00:06:44.053 SO libspdk_nvme.so.15.0 00:06:44.315 LIB libspdk_event.a 00:06:44.315 SO libspdk_event.so.15.0 00:06:44.315 SYMLINK libspdk_event.so 00:06:44.315 SYMLINK libspdk_nvme.so 00:06:44.577 CC lib/bdev/bdev.o 00:06:44.577 CC lib/bdev/bdev_rpc.o 00:06:44.577 CC lib/bdev/bdev_zone.o 00:06:44.577 CC lib/bdev/part.o 00:06:44.577 CC lib/bdev/scsi_nvme.o 00:06:44.577 LIB libspdk_fuse_dispatcher.a 00:06:44.839 SO libspdk_fuse_dispatcher.so.1.0 00:06:44.839 SYMLINK libspdk_fuse_dispatcher.so 00:06:45.782 LIB libspdk_blob.a 00:06:45.782 SO libspdk_blob.so.11.0 00:06:45.782 SYMLINK libspdk_blob.so 00:06:46.044 CC lib/blobfs/blobfs.o 00:06:46.044 CC lib/lvol/lvol.o 00:06:46.044 CC lib/blobfs/tree.o 00:06:46.987 LIB libspdk_bdev.a 00:06:46.987 SO libspdk_bdev.so.17.0 00:06:46.987 LIB libspdk_blobfs.a 00:06:46.987 SYMLINK libspdk_bdev.so 00:06:46.987 SO libspdk_blobfs.so.10.0 00:06:46.987 LIB libspdk_lvol.a 00:06:46.987 SYMLINK libspdk_blobfs.so 00:06:46.987 SO libspdk_lvol.so.10.0 00:06:46.987 SYMLINK libspdk_lvol.so 00:06:47.256 CC lib/nvmf/ctrlr.o 00:06:47.256 CC lib/scsi/dev.o 00:06:47.256 CC lib/nvmf/ctrlr_bdev.o 00:06:47.256 CC lib/ftl/ftl_core.o 00:06:47.256 CC lib/nvmf/ctrlr_discovery.o 00:06:47.256 CC lib/ftl/ftl_init.o 00:06:47.256 CC lib/scsi/lun.o 00:06:47.256 CC lib/nvmf/subsystem.o 00:06:47.256 CC lib/scsi/port.o 00:06:47.256 CC lib/ftl/ftl_layout.o 00:06:47.256 CC lib/nvmf/nvmf.o 00:06:47.256 CC lib/scsi/scsi.o 00:06:47.256 CC lib/ftl/ftl_debug.o 00:06:47.256 CC lib/nbd/nbd.o 00:06:47.256 CC lib/nvmf/nvmf_rpc.o 00:06:47.256 CC lib/scsi/scsi_bdev.o 00:06:47.256 CC lib/nvmf/transport.o 00:06:47.256 CC lib/ftl/ftl_io.o 00:06:47.256 CC lib/nbd/nbd_rpc.o 00:06:47.256 CC lib/scsi/scsi_pr.o 00:06:47.256 CC lib/nvmf/tcp.o 00:06:47.256 CC lib/ftl/ftl_sb.o 00:06:47.256 CC lib/scsi/scsi_rpc.o 00:06:47.256 CC lib/nvmf/stubs.o 00:06:47.256 CC lib/ftl/ftl_l2p.o 00:06:47.256 CC lib/ublk/ublk.o 00:06:47.256 CC lib/scsi/task.o 00:06:47.256 CC lib/nvmf/mdns_server.o 00:06:47.256 CC lib/ftl/ftl_l2p_flat.o 00:06:47.256 CC lib/ftl/ftl_nv_cache.o 00:06:47.256 CC lib/nvmf/vfio_user.o 00:06:47.256 CC lib/ublk/ublk_rpc.o 00:06:47.256 CC lib/ftl/ftl_band.o 00:06:47.256 CC lib/ftl/ftl_band_ops.o 00:06:47.256 CC lib/nvmf/rdma.o 00:06:47.256 CC lib/nvmf/auth.o 00:06:47.256 CC lib/ftl/ftl_writer.o 00:06:47.256 CC lib/ftl/ftl_rq.o 00:06:47.256 CC lib/ftl/ftl_l2p_cache.o 00:06:47.256 CC lib/ftl/ftl_reloc.o 00:06:47.256 CC lib/ftl/ftl_p2l.o 00:06:47.256 CC lib/ftl/ftl_p2l_log.o 00:06:47.256 CC lib/ftl/mngt/ftl_mngt.o 00:06:47.256 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:47.256 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:47.256 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:47.256 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:47.256 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:47.256 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:47.256 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:47.256 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:47.256 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:47.256 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:47.256 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:47.256 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:47.256 CC lib/ftl/utils/ftl_conf.o 
00:06:47.256 CC lib/ftl/utils/ftl_md.o 00:06:47.256 CC lib/ftl/utils/ftl_mempool.o 00:06:47.256 CC lib/ftl/utils/ftl_bitmap.o 00:06:47.256 CC lib/ftl/utils/ftl_property.o 00:06:47.256 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:47.256 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:47.256 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:47.256 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:47.256 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:47.256 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:47.256 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:47.256 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:47.256 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:47.256 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:47.256 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:47.256 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:06:47.256 CC lib/ftl/ftl_trace.o 00:06:47.256 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:06:47.256 CC lib/ftl/base/ftl_base_bdev.o 00:06:47.518 CC lib/ftl/base/ftl_base_dev.o 00:06:48.091 LIB libspdk_nbd.a 00:06:48.091 SO libspdk_nbd.so.7.0 00:06:48.091 SYMLINK libspdk_nbd.so 00:06:48.091 LIB libspdk_scsi.a 00:06:48.352 LIB libspdk_ublk.a 00:06:48.352 SO libspdk_scsi.so.9.0 00:06:48.352 SO libspdk_ublk.so.3.0 00:06:48.352 SYMLINK libspdk_scsi.so 00:06:48.352 SYMLINK libspdk_ublk.so 00:06:48.614 LIB libspdk_ftl.a 00:06:48.614 CC lib/vhost/vhost.o 00:06:48.614 CC lib/vhost/vhost_rpc.o 00:06:48.614 CC lib/vhost/vhost_scsi.o 00:06:48.614 CC lib/vhost/vhost_blk.o 00:06:48.614 CC lib/vhost/rte_vhost_user.o 00:06:48.614 CC lib/iscsi/conn.o 00:06:48.614 CC lib/iscsi/init_grp.o 00:06:48.614 CC lib/iscsi/iscsi.o 00:06:48.614 CC lib/iscsi/param.o 00:06:48.614 CC lib/iscsi/portal_grp.o 00:06:48.614 CC lib/iscsi/tgt_node.o 00:06:48.614 CC lib/iscsi/iscsi_subsystem.o 00:06:48.614 CC lib/iscsi/iscsi_rpc.o 00:06:48.614 CC lib/iscsi/task.o 00:06:48.614 SO libspdk_ftl.so.9.0 00:06:49.185 SYMLINK libspdk_ftl.so 00:06:49.446 LIB libspdk_nvmf.a 00:06:49.446 SO libspdk_nvmf.so.19.0 00:06:49.707 LIB libspdk_vhost.a 00:06:49.707 SO libspdk_vhost.so.8.0 00:06:49.707 SYMLINK libspdk_nvmf.so 00:06:49.707 SYMLINK libspdk_vhost.so 00:06:49.968 LIB libspdk_iscsi.a 00:06:49.968 SO libspdk_iscsi.so.8.0 00:06:50.230 SYMLINK libspdk_iscsi.so 00:06:50.804 CC module/env_dpdk/env_dpdk_rpc.o 00:06:50.804 CC module/vfu_device/vfu_virtio.o 00:06:50.804 CC module/vfu_device/vfu_virtio_blk.o 00:06:50.804 CC module/vfu_device/vfu_virtio_scsi.o 00:06:50.804 CC module/vfu_device/vfu_virtio_rpc.o 00:06:50.804 CC module/vfu_device/vfu_virtio_fs.o 00:06:50.804 LIB libspdk_env_dpdk_rpc.a 00:06:50.804 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:50.804 CC module/accel/error/accel_error_rpc.o 00:06:50.804 CC module/accel/error/accel_error.o 00:06:50.804 CC module/blob/bdev/blob_bdev.o 00:06:50.804 CC module/accel/ioat/accel_ioat.o 00:06:50.804 CC module/accel/ioat/accel_ioat_rpc.o 00:06:50.804 CC module/keyring/file/keyring.o 00:06:50.804 CC module/keyring/file/keyring_rpc.o 00:06:50.804 CC module/accel/iaa/accel_iaa.o 00:06:50.804 CC module/accel/iaa/accel_iaa_rpc.o 00:06:50.804 CC module/scheduler/gscheduler/gscheduler.o 00:06:50.804 CC module/sock/posix/posix.o 00:06:51.067 CC module/accel/dsa/accel_dsa.o 00:06:51.067 CC module/keyring/linux/keyring.o 00:06:51.067 CC module/accel/dsa/accel_dsa_rpc.o 00:06:51.067 CC module/keyring/linux/keyring_rpc.o 00:06:51.067 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:51.067 CC module/fsdev/aio/fsdev_aio.o 00:06:51.067 CC module/fsdev/aio/fsdev_aio_rpc.o 00:06:51.067 CC module/fsdev/aio/linux_aio_mgr.o 00:06:51.067 SO 
libspdk_env_dpdk_rpc.so.6.0 00:06:51.067 SYMLINK libspdk_env_dpdk_rpc.so 00:06:51.067 LIB libspdk_keyring_file.a 00:06:51.067 LIB libspdk_keyring_linux.a 00:06:51.067 LIB libspdk_accel_error.a 00:06:51.067 LIB libspdk_scheduler_dynamic.a 00:06:51.067 LIB libspdk_scheduler_gscheduler.a 00:06:51.067 LIB libspdk_scheduler_dpdk_governor.a 00:06:51.067 LIB libspdk_accel_ioat.a 00:06:51.067 SO libspdk_keyring_file.so.2.0 00:06:51.067 SO libspdk_keyring_linux.so.1.0 00:06:51.067 LIB libspdk_accel_iaa.a 00:06:51.067 SO libspdk_accel_error.so.2.0 00:06:51.067 SO libspdk_scheduler_dynamic.so.4.0 00:06:51.067 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:51.067 SO libspdk_scheduler_gscheduler.so.4.0 00:06:51.329 SO libspdk_accel_ioat.so.6.0 00:06:51.329 SO libspdk_accel_iaa.so.3.0 00:06:51.329 SYMLINK libspdk_keyring_file.so 00:06:51.329 SYMLINK libspdk_keyring_linux.so 00:06:51.329 LIB libspdk_blob_bdev.a 00:06:51.329 SYMLINK libspdk_scheduler_gscheduler.so 00:06:51.329 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:51.329 SYMLINK libspdk_scheduler_dynamic.so 00:06:51.329 SYMLINK libspdk_accel_error.so 00:06:51.329 LIB libspdk_accel_dsa.a 00:06:51.329 SYMLINK libspdk_accel_ioat.so 00:06:51.329 SO libspdk_blob_bdev.so.11.0 00:06:51.329 SYMLINK libspdk_accel_iaa.so 00:06:51.329 SO libspdk_accel_dsa.so.5.0 00:06:51.329 LIB libspdk_vfu_device.a 00:06:51.329 SYMLINK libspdk_blob_bdev.so 00:06:51.329 SYMLINK libspdk_accel_dsa.so 00:06:51.329 SO libspdk_vfu_device.so.3.0 00:06:51.591 SYMLINK libspdk_vfu_device.so 00:06:51.591 LIB libspdk_sock_posix.a 00:06:51.591 SO libspdk_sock_posix.so.6.0 00:06:51.591 LIB libspdk_fsdev_aio.a 00:06:51.591 SO libspdk_fsdev_aio.so.1.0 00:06:51.591 SYMLINK libspdk_sock_posix.so 00:06:51.853 SYMLINK libspdk_fsdev_aio.so 00:06:51.853 CC module/blobfs/bdev/blobfs_bdev.o 00:06:51.853 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:51.853 CC module/bdev/error/vbdev_error_rpc.o 00:06:51.853 CC module/bdev/error/vbdev_error.o 00:06:51.853 CC module/bdev/delay/vbdev_delay.o 00:06:51.853 CC module/bdev/gpt/gpt.o 00:06:51.853 CC module/bdev/null/bdev_null_rpc.o 00:06:51.853 CC module/bdev/null/bdev_null.o 00:06:51.853 CC module/bdev/malloc/bdev_malloc.o 00:06:51.853 CC module/bdev/gpt/vbdev_gpt.o 00:06:51.853 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:51.853 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:51.853 CC module/bdev/passthru/vbdev_passthru.o 00:06:51.853 CC module/bdev/lvol/vbdev_lvol.o 00:06:51.853 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:51.853 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:51.853 CC module/bdev/aio/bdev_aio.o 00:06:51.853 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:51.853 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:51.853 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:51.853 CC module/bdev/aio/bdev_aio_rpc.o 00:06:51.853 CC module/bdev/split/vbdev_split.o 00:06:51.853 CC module/bdev/iscsi/bdev_iscsi.o 00:06:51.853 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:51.853 CC module/bdev/split/vbdev_split_rpc.o 00:06:51.853 CC module/bdev/ftl/bdev_ftl.o 00:06:51.853 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:51.853 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:51.853 CC module/bdev/nvme/bdev_nvme.o 00:06:51.853 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:51.853 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:51.853 CC module/bdev/raid/bdev_raid.o 00:06:51.853 CC module/bdev/nvme/nvme_rpc.o 00:06:51.853 CC module/bdev/raid/bdev_raid_rpc.o 00:06:51.853 CC module/bdev/nvme/bdev_mdns_client.o 00:06:51.853 CC module/bdev/raid/bdev_raid_sb.o 
00:06:51.853 CC module/bdev/nvme/vbdev_opal.o 00:06:51.853 CC module/bdev/raid/raid0.o 00:06:51.853 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:51.853 CC module/bdev/raid/raid1.o 00:06:51.853 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:51.853 CC module/bdev/raid/concat.o 00:06:52.113 LIB libspdk_blobfs_bdev.a 00:06:52.113 SO libspdk_blobfs_bdev.so.6.0 00:06:52.376 LIB libspdk_bdev_split.a 00:06:52.376 LIB libspdk_bdev_null.a 00:06:52.376 LIB libspdk_bdev_error.a 00:06:52.376 SYMLINK libspdk_blobfs_bdev.so 00:06:52.376 LIB libspdk_bdev_gpt.a 00:06:52.376 LIB libspdk_bdev_passthru.a 00:06:52.376 SO libspdk_bdev_null.so.6.0 00:06:52.376 SO libspdk_bdev_split.so.6.0 00:06:52.376 SO libspdk_bdev_error.so.6.0 00:06:52.376 LIB libspdk_bdev_malloc.a 00:06:52.376 SO libspdk_bdev_gpt.so.6.0 00:06:52.376 SO libspdk_bdev_passthru.so.6.0 00:06:52.376 LIB libspdk_bdev_zone_block.a 00:06:52.376 LIB libspdk_bdev_ftl.a 00:06:52.376 LIB libspdk_bdev_aio.a 00:06:52.376 SO libspdk_bdev_zone_block.so.6.0 00:06:52.376 SO libspdk_bdev_malloc.so.6.0 00:06:52.376 SYMLINK libspdk_bdev_null.so 00:06:52.376 SYMLINK libspdk_bdev_error.so 00:06:52.376 SO libspdk_bdev_ftl.so.6.0 00:06:52.376 SYMLINK libspdk_bdev_split.so 00:06:52.376 LIB libspdk_bdev_iscsi.a 00:06:52.376 SYMLINK libspdk_bdev_gpt.so 00:06:52.376 LIB libspdk_bdev_delay.a 00:06:52.376 SYMLINK libspdk_bdev_passthru.so 00:06:52.376 SO libspdk_bdev_aio.so.6.0 00:06:52.376 SO libspdk_bdev_iscsi.so.6.0 00:06:52.376 SO libspdk_bdev_delay.so.6.0 00:06:52.376 SYMLINK libspdk_bdev_ftl.so 00:06:52.376 SYMLINK libspdk_bdev_zone_block.so 00:06:52.376 SYMLINK libspdk_bdev_malloc.so 00:06:52.376 LIB libspdk_bdev_lvol.a 00:06:52.638 SYMLINK libspdk_bdev_aio.so 00:06:52.638 SYMLINK libspdk_bdev_iscsi.so 00:06:52.638 LIB libspdk_bdev_virtio.a 00:06:52.638 SYMLINK libspdk_bdev_delay.so 00:06:52.638 SO libspdk_bdev_lvol.so.6.0 00:06:52.638 SO libspdk_bdev_virtio.so.6.0 00:06:52.638 SYMLINK libspdk_bdev_lvol.so 00:06:52.638 SYMLINK libspdk_bdev_virtio.so 00:06:52.899 LIB libspdk_bdev_raid.a 00:06:52.899 SO libspdk_bdev_raid.so.6.0 00:06:53.161 SYMLINK libspdk_bdev_raid.so 00:06:54.105 LIB libspdk_bdev_nvme.a 00:06:54.105 SO libspdk_bdev_nvme.so.7.0 00:06:54.105 SYMLINK libspdk_bdev_nvme.so 00:06:55.052 CC module/event/subsystems/iobuf/iobuf.o 00:06:55.052 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:55.052 CC module/event/subsystems/sock/sock.o 00:06:55.052 CC module/event/subsystems/scheduler/scheduler.o 00:06:55.052 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:06:55.052 CC module/event/subsystems/keyring/keyring.o 00:06:55.052 CC module/event/subsystems/vmd/vmd.o 00:06:55.052 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:55.052 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:55.052 CC module/event/subsystems/fsdev/fsdev.o 00:06:55.052 LIB libspdk_event_fsdev.a 00:06:55.052 LIB libspdk_event_iobuf.a 00:06:55.052 LIB libspdk_event_keyring.a 00:06:55.052 LIB libspdk_event_vhost_blk.a 00:06:55.052 LIB libspdk_event_sock.a 00:06:55.052 LIB libspdk_event_vfu_tgt.a 00:06:55.052 LIB libspdk_event_scheduler.a 00:06:55.052 LIB libspdk_event_vmd.a 00:06:55.052 SO libspdk_event_fsdev.so.1.0 00:06:55.052 SO libspdk_event_iobuf.so.3.0 00:06:55.052 SO libspdk_event_keyring.so.1.0 00:06:55.052 SO libspdk_event_sock.so.5.0 00:06:55.052 SO libspdk_event_vhost_blk.so.3.0 00:06:55.052 SO libspdk_event_scheduler.so.4.0 00:06:55.052 SO libspdk_event_vfu_tgt.so.3.0 00:06:55.052 SO libspdk_event_vmd.so.6.0 00:06:55.314 SYMLINK libspdk_event_fsdev.so 00:06:55.314 SYMLINK 
libspdk_event_keyring.so 00:06:55.314 SYMLINK libspdk_event_sock.so 00:06:55.314 SYMLINK libspdk_event_vhost_blk.so 00:06:55.314 SYMLINK libspdk_event_vfu_tgt.so 00:06:55.314 SYMLINK libspdk_event_iobuf.so 00:06:55.314 SYMLINK libspdk_event_scheduler.so 00:06:55.314 SYMLINK libspdk_event_vmd.so 00:06:55.576 CC module/event/subsystems/accel/accel.o 00:06:55.838 LIB libspdk_event_accel.a 00:06:55.838 SO libspdk_event_accel.so.6.0 00:06:55.838 SYMLINK libspdk_event_accel.so 00:06:56.100 CC module/event/subsystems/bdev/bdev.o 00:06:56.361 LIB libspdk_event_bdev.a 00:06:56.361 SO libspdk_event_bdev.so.6.0 00:06:56.361 SYMLINK libspdk_event_bdev.so 00:06:56.933 CC module/event/subsystems/scsi/scsi.o 00:06:56.933 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:56.933 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:56.933 CC module/event/subsystems/nbd/nbd.o 00:06:56.933 CC module/event/subsystems/ublk/ublk.o 00:06:56.933 LIB libspdk_event_nbd.a 00:06:56.933 LIB libspdk_event_ublk.a 00:06:56.933 LIB libspdk_event_scsi.a 00:06:56.933 SO libspdk_event_ublk.so.3.0 00:06:56.933 SO libspdk_event_nbd.so.6.0 00:06:56.933 SO libspdk_event_scsi.so.6.0 00:06:57.195 LIB libspdk_event_nvmf.a 00:06:57.195 SYMLINK libspdk_event_nbd.so 00:06:57.195 SYMLINK libspdk_event_ublk.so 00:06:57.195 SYMLINK libspdk_event_scsi.so 00:06:57.195 SO libspdk_event_nvmf.so.6.0 00:06:57.195 SYMLINK libspdk_event_nvmf.so 00:06:57.456 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:57.456 CC module/event/subsystems/iscsi/iscsi.o 00:06:57.718 LIB libspdk_event_vhost_scsi.a 00:06:57.718 SO libspdk_event_vhost_scsi.so.3.0 00:06:57.718 LIB libspdk_event_iscsi.a 00:06:57.718 SO libspdk_event_iscsi.so.6.0 00:06:57.718 SYMLINK libspdk_event_vhost_scsi.so 00:06:57.718 SYMLINK libspdk_event_iscsi.so 00:06:57.980 SO libspdk.so.6.0 00:06:57.980 SYMLINK libspdk.so 00:06:58.242 TEST_HEADER include/spdk/accel.h 00:06:58.242 TEST_HEADER include/spdk/accel_module.h 00:06:58.242 TEST_HEADER include/spdk/assert.h 00:06:58.242 CXX app/trace/trace.o 00:06:58.242 TEST_HEADER include/spdk/base64.h 00:06:58.242 TEST_HEADER include/spdk/barrier.h 00:06:58.242 TEST_HEADER include/spdk/bdev.h 00:06:58.242 TEST_HEADER include/spdk/bdev_module.h 00:06:58.242 TEST_HEADER include/spdk/bdev_zone.h 00:06:58.242 CC app/trace_record/trace_record.o 00:06:58.242 CC test/rpc_client/rpc_client_test.o 00:06:58.242 CC app/spdk_nvme_identify/identify.o 00:06:58.242 TEST_HEADER include/spdk/bit_array.h 00:06:58.242 CC app/spdk_nvme_discover/discovery_aer.o 00:06:58.242 TEST_HEADER include/spdk/bit_pool.h 00:06:58.242 CC app/spdk_top/spdk_top.o 00:06:58.242 CC app/spdk_nvme_perf/perf.o 00:06:58.242 TEST_HEADER include/spdk/blob_bdev.h 00:06:58.242 TEST_HEADER include/spdk/blobfs.h 00:06:58.242 CC app/spdk_lspci/spdk_lspci.o 00:06:58.242 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:58.507 TEST_HEADER include/spdk/blob.h 00:06:58.507 TEST_HEADER include/spdk/conf.h 00:06:58.507 TEST_HEADER include/spdk/cpuset.h 00:06:58.507 TEST_HEADER include/spdk/config.h 00:06:58.507 TEST_HEADER include/spdk/crc16.h 00:06:58.507 TEST_HEADER include/spdk/crc32.h 00:06:58.507 TEST_HEADER include/spdk/crc64.h 00:06:58.507 TEST_HEADER include/spdk/dif.h 00:06:58.507 TEST_HEADER include/spdk/dma.h 00:06:58.507 TEST_HEADER include/spdk/endian.h 00:06:58.507 TEST_HEADER include/spdk/env_dpdk.h 00:06:58.507 TEST_HEADER include/spdk/event.h 00:06:58.507 TEST_HEADER include/spdk/env.h 00:06:58.507 TEST_HEADER include/spdk/fd_group.h 00:06:58.507 TEST_HEADER include/spdk/file.h 
00:06:58.507 TEST_HEADER include/spdk/fd.h 00:06:58.507 TEST_HEADER include/spdk/fsdev.h 00:06:58.507 TEST_HEADER include/spdk/fsdev_module.h 00:06:58.507 TEST_HEADER include/spdk/ftl.h 00:06:58.507 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:58.507 TEST_HEADER include/spdk/gpt_spec.h 00:06:58.507 TEST_HEADER include/spdk/histogram_data.h 00:06:58.507 TEST_HEADER include/spdk/hexlify.h 00:06:58.507 TEST_HEADER include/spdk/idxd.h 00:06:58.507 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:58.507 TEST_HEADER include/spdk/init.h 00:06:58.507 TEST_HEADER include/spdk/idxd_spec.h 00:06:58.507 TEST_HEADER include/spdk/ioat.h 00:06:58.507 CC app/iscsi_tgt/iscsi_tgt.o 00:06:58.507 TEST_HEADER include/spdk/ioat_spec.h 00:06:58.507 TEST_HEADER include/spdk/iscsi_spec.h 00:06:58.507 TEST_HEADER include/spdk/json.h 00:06:58.507 TEST_HEADER include/spdk/keyring_module.h 00:06:58.507 CC app/spdk_dd/spdk_dd.o 00:06:58.507 TEST_HEADER include/spdk/jsonrpc.h 00:06:58.507 TEST_HEADER include/spdk/log.h 00:06:58.507 TEST_HEADER include/spdk/keyring.h 00:06:58.507 TEST_HEADER include/spdk/lvol.h 00:06:58.507 TEST_HEADER include/spdk/likely.h 00:06:58.507 TEST_HEADER include/spdk/memory.h 00:06:58.507 TEST_HEADER include/spdk/md5.h 00:06:58.507 TEST_HEADER include/spdk/mmio.h 00:06:58.507 TEST_HEADER include/spdk/nbd.h 00:06:58.507 CC app/nvmf_tgt/nvmf_main.o 00:06:58.507 TEST_HEADER include/spdk/net.h 00:06:58.507 TEST_HEADER include/spdk/notify.h 00:06:58.507 TEST_HEADER include/spdk/nvme.h 00:06:58.507 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:58.508 TEST_HEADER include/spdk/nvme_intel.h 00:06:58.508 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:58.508 TEST_HEADER include/spdk/nvme_spec.h 00:06:58.508 TEST_HEADER include/spdk/nvme_zns.h 00:06:58.508 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:58.508 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:58.508 TEST_HEADER include/spdk/nvmf.h 00:06:58.508 TEST_HEADER include/spdk/nvmf_spec.h 00:06:58.508 CC app/spdk_tgt/spdk_tgt.o 00:06:58.508 TEST_HEADER include/spdk/nvmf_transport.h 00:06:58.508 TEST_HEADER include/spdk/opal.h 00:06:58.508 TEST_HEADER include/spdk/opal_spec.h 00:06:58.508 TEST_HEADER include/spdk/pci_ids.h 00:06:58.508 TEST_HEADER include/spdk/pipe.h 00:06:58.508 TEST_HEADER include/spdk/queue.h 00:06:58.508 TEST_HEADER include/spdk/reduce.h 00:06:58.508 TEST_HEADER include/spdk/rpc.h 00:06:58.508 TEST_HEADER include/spdk/scheduler.h 00:06:58.508 TEST_HEADER include/spdk/scsi.h 00:06:58.508 TEST_HEADER include/spdk/scsi_spec.h 00:06:58.508 TEST_HEADER include/spdk/sock.h 00:06:58.508 TEST_HEADER include/spdk/stdinc.h 00:06:58.508 TEST_HEADER include/spdk/thread.h 00:06:58.508 TEST_HEADER include/spdk/string.h 00:06:58.508 TEST_HEADER include/spdk/trace.h 00:06:58.508 TEST_HEADER include/spdk/trace_parser.h 00:06:58.508 TEST_HEADER include/spdk/ublk.h 00:06:58.508 TEST_HEADER include/spdk/tree.h 00:06:58.508 TEST_HEADER include/spdk/util.h 00:06:58.508 TEST_HEADER include/spdk/uuid.h 00:06:58.508 TEST_HEADER include/spdk/version.h 00:06:58.508 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:58.508 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:58.508 TEST_HEADER include/spdk/vhost.h 00:06:58.508 TEST_HEADER include/spdk/vmd.h 00:06:58.508 TEST_HEADER include/spdk/xor.h 00:06:58.508 TEST_HEADER include/spdk/zipf.h 00:06:58.508 CXX test/cpp_headers/accel.o 00:06:58.508 CXX test/cpp_headers/accel_module.o 00:06:58.508 CXX test/cpp_headers/assert.o 00:06:58.508 CXX test/cpp_headers/barrier.o 00:06:58.508 CXX test/cpp_headers/base64.o 
00:06:58.508 CXX test/cpp_headers/bdev_module.o 00:06:58.508 CXX test/cpp_headers/bdev.o 00:06:58.508 CXX test/cpp_headers/bdev_zone.o 00:06:58.508 CXX test/cpp_headers/bit_array.o 00:06:58.508 CXX test/cpp_headers/blob_bdev.o 00:06:58.508 CXX test/cpp_headers/bit_pool.o 00:06:58.508 CXX test/cpp_headers/blobfs_bdev.o 00:06:58.508 CXX test/cpp_headers/blob.o 00:06:58.508 CXX test/cpp_headers/blobfs.o 00:06:58.508 CXX test/cpp_headers/config.o 00:06:58.508 CXX test/cpp_headers/conf.o 00:06:58.508 CXX test/cpp_headers/crc16.o 00:06:58.508 CXX test/cpp_headers/crc32.o 00:06:58.508 CXX test/cpp_headers/crc64.o 00:06:58.508 CXX test/cpp_headers/cpuset.o 00:06:58.508 CXX test/cpp_headers/dif.o 00:06:58.508 CXX test/cpp_headers/dma.o 00:06:58.508 CXX test/cpp_headers/env_dpdk.o 00:06:58.508 CXX test/cpp_headers/env.o 00:06:58.508 CXX test/cpp_headers/endian.o 00:06:58.508 CXX test/cpp_headers/event.o 00:06:58.508 CXX test/cpp_headers/fd_group.o 00:06:58.508 CXX test/cpp_headers/fsdev.o 00:06:58.508 CXX test/cpp_headers/fd.o 00:06:58.508 CXX test/cpp_headers/file.o 00:06:58.508 CXX test/cpp_headers/fsdev_module.o 00:06:58.508 CXX test/cpp_headers/ftl.o 00:06:58.508 CXX test/cpp_headers/fuse_dispatcher.o 00:06:58.508 CXX test/cpp_headers/gpt_spec.o 00:06:58.508 CXX test/cpp_headers/hexlify.o 00:06:58.508 CXX test/cpp_headers/idxd.o 00:06:58.508 CXX test/cpp_headers/histogram_data.o 00:06:58.508 CXX test/cpp_headers/idxd_spec.o 00:06:58.508 CXX test/cpp_headers/ioat.o 00:06:58.508 CXX test/cpp_headers/iscsi_spec.o 00:06:58.508 CXX test/cpp_headers/init.o 00:06:58.508 CXX test/cpp_headers/json.o 00:06:58.508 CXX test/cpp_headers/ioat_spec.o 00:06:58.508 CXX test/cpp_headers/jsonrpc.o 00:06:58.508 CXX test/cpp_headers/keyring.o 00:06:58.508 CXX test/cpp_headers/keyring_module.o 00:06:58.508 CXX test/cpp_headers/lvol.o 00:06:58.508 CXX test/cpp_headers/log.o 00:06:58.508 CXX test/cpp_headers/likely.o 00:06:58.508 CXX test/cpp_headers/net.o 00:06:58.508 CXX test/cpp_headers/nbd.o 00:06:58.508 CXX test/cpp_headers/memory.o 00:06:58.508 CXX test/cpp_headers/md5.o 00:06:58.508 CXX test/cpp_headers/mmio.o 00:06:58.508 CXX test/cpp_headers/notify.o 00:06:58.508 CXX test/cpp_headers/nvme.o 00:06:58.508 CXX test/cpp_headers/nvme_intel.o 00:06:58.508 CXX test/cpp_headers/nvme_spec.o 00:06:58.508 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:58.508 CXX test/cpp_headers/nvme_zns.o 00:06:58.508 CXX test/cpp_headers/nvmf_cmd.o 00:06:58.508 CXX test/cpp_headers/nvme_ocssd.o 00:06:58.508 CXX test/cpp_headers/nvmf_transport.o 00:06:58.508 CXX test/cpp_headers/nvmf.o 00:06:58.508 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:58.508 CXX test/cpp_headers/opal.o 00:06:58.508 CXX test/cpp_headers/nvmf_spec.o 00:06:58.508 CXX test/cpp_headers/opal_spec.o 00:06:58.508 CXX test/cpp_headers/queue.o 00:06:58.508 CXX test/cpp_headers/pci_ids.o 00:06:58.508 CXX test/cpp_headers/rpc.o 00:06:58.508 CXX test/cpp_headers/pipe.o 00:06:58.508 CXX test/cpp_headers/reduce.o 00:06:58.508 CXX test/cpp_headers/sock.o 00:06:58.508 CXX test/cpp_headers/scsi_spec.o 00:06:58.508 CXX test/cpp_headers/scheduler.o 00:06:58.508 CXX test/cpp_headers/scsi.o 00:06:58.508 CC examples/util/zipf/zipf.o 00:06:58.508 CXX test/cpp_headers/thread.o 00:06:58.508 CXX test/cpp_headers/trace.o 00:06:58.783 CXX test/cpp_headers/stdinc.o 00:06:58.783 CXX test/cpp_headers/string.o 00:06:58.783 CC examples/ioat/verify/verify.o 00:06:58.783 CXX test/cpp_headers/trace_parser.o 00:06:58.783 CXX test/cpp_headers/ublk.o 00:06:58.783 CXX test/cpp_headers/util.o 00:06:58.783 CC 
examples/ioat/perf/perf.o 00:06:58.783 CC test/thread/poller_perf/poller_perf.o 00:06:58.783 CXX test/cpp_headers/tree.o 00:06:58.783 CXX test/cpp_headers/uuid.o 00:06:58.783 CXX test/cpp_headers/version.o 00:06:58.783 CXX test/cpp_headers/vfio_user_spec.o 00:06:58.783 CXX test/cpp_headers/vhost.o 00:06:58.783 CXX test/cpp_headers/vmd.o 00:06:58.783 CXX test/cpp_headers/vfio_user_pci.o 00:06:58.783 CXX test/cpp_headers/xor.o 00:06:58.783 CXX test/cpp_headers/zipf.o 00:06:58.783 CC test/app/jsoncat/jsoncat.o 00:06:58.783 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:58.783 CC test/app/stub/stub.o 00:06:58.783 CC test/env/vtophys/vtophys.o 00:06:58.783 CC test/env/pci/pci_ut.o 00:06:58.783 CC test/dma/test_dma/test_dma.o 00:06:58.783 CC test/env/memory/memory_ut.o 00:06:58.783 CC app/fio/nvme/fio_plugin.o 00:06:58.783 CC test/app/histogram_perf/histogram_perf.o 00:06:59.055 CC app/fio/bdev/fio_plugin.o 00:06:59.055 CC test/app/bdev_svc/bdev_svc.o 00:06:59.055 LINK spdk_lspci 00:06:59.055 LINK iscsi_tgt 00:06:59.055 LINK interrupt_tgt 00:06:59.339 LINK rpc_client_test 00:06:59.339 LINK spdk_nvme_discover 00:06:59.339 LINK ioat_perf 00:06:59.614 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:59.614 CC test/env/mem_callbacks/mem_callbacks.o 00:06:59.614 LINK spdk_trace 00:06:59.614 LINK spdk_tgt 00:06:59.614 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:59.614 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:59.614 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:59.614 LINK spdk_dd 00:06:59.614 LINK env_dpdk_post_init 00:06:59.614 LINK jsoncat 00:06:59.614 LINK nvmf_tgt 00:06:59.614 LINK spdk_trace_record 00:06:59.614 LINK verify 00:06:59.878 LINK histogram_perf 00:06:59.878 LINK bdev_svc 00:06:59.878 LINK zipf 00:06:59.878 LINK poller_perf 00:06:59.878 LINK vtophys 00:06:59.878 CC app/vhost/vhost.o 00:07:00.139 LINK stub 00:07:00.139 LINK pci_ut 00:07:00.139 LINK spdk_nvme_perf 00:07:00.139 LINK nvme_fuzz 00:07:00.139 LINK vhost_fuzz 00:07:00.139 LINK spdk_top 00:07:00.139 LINK test_dma 00:07:00.139 LINK vhost 00:07:00.401 LINK mem_callbacks 00:07:00.401 LINK spdk_nvme_identify 00:07:00.401 CC examples/idxd/perf/perf.o 00:07:00.401 CC examples/vmd/led/led.o 00:07:00.401 CC examples/vmd/lsvmd/lsvmd.o 00:07:00.401 CC examples/sock/hello_world/hello_sock.o 00:07:00.401 CC test/event/event_perf/event_perf.o 00:07:00.401 CC test/event/reactor/reactor.o 00:07:00.401 CC test/event/reactor_perf/reactor_perf.o 00:07:00.401 CC test/event/app_repeat/app_repeat.o 00:07:00.401 CC examples/thread/thread/thread_ex.o 00:07:00.401 LINK spdk_nvme 00:07:00.401 CC test/event/scheduler/scheduler.o 00:07:00.663 LINK spdk_bdev 00:07:00.663 LINK lsvmd 00:07:00.663 LINK led 00:07:00.663 LINK reactor 00:07:00.663 LINK reactor_perf 00:07:00.663 LINK event_perf 00:07:00.663 LINK app_repeat 00:07:00.663 LINK hello_sock 00:07:00.923 CC test/nvme/compliance/nvme_compliance.o 00:07:00.923 CC test/nvme/aer/aer.o 00:07:00.923 CC test/nvme/connect_stress/connect_stress.o 00:07:00.923 CC test/nvme/startup/startup.o 00:07:00.923 CC test/nvme/sgl/sgl.o 00:07:00.923 CC test/nvme/boot_partition/boot_partition.o 00:07:00.923 CC test/nvme/reset/reset.o 00:07:00.923 CC test/nvme/cuse/cuse.o 00:07:00.923 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:00.923 CC test/nvme/overhead/overhead.o 00:07:00.923 CC test/nvme/fused_ordering/fused_ordering.o 00:07:00.923 LINK idxd_perf 00:07:00.923 CC test/nvme/err_injection/err_injection.o 00:07:00.923 CC test/nvme/fdp/fdp.o 00:07:00.923 CC test/nvme/reserve/reserve.o 00:07:00.923 CC 
test/nvme/e2edp/nvme_dp.o 00:07:00.923 CC test/nvme/simple_copy/simple_copy.o 00:07:00.923 LINK scheduler 00:07:00.923 CC test/blobfs/mkfs/mkfs.o 00:07:00.923 CC test/accel/dif/dif.o 00:07:00.923 LINK thread 00:07:00.923 CC test/lvol/esnap/esnap.o 00:07:00.923 LINK memory_ut 00:07:00.923 LINK startup 00:07:01.183 LINK boot_partition 00:07:01.183 LINK connect_stress 00:07:01.183 LINK err_injection 00:07:01.183 LINK fused_ordering 00:07:01.183 LINK doorbell_aers 00:07:01.183 LINK mkfs 00:07:01.183 LINK reserve 00:07:01.183 LINK simple_copy 00:07:01.183 LINK sgl 00:07:01.183 LINK nvme_dp 00:07:01.183 LINK reset 00:07:01.183 LINK nvme_compliance 00:07:01.183 LINK aer 00:07:01.183 LINK overhead 00:07:01.183 LINK fdp 00:07:01.183 LINK iscsi_fuzz 00:07:01.444 CC examples/nvme/reconnect/reconnect.o 00:07:01.444 CC examples/nvme/hello_world/hello_world.o 00:07:01.444 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:01.444 CC examples/nvme/abort/abort.o 00:07:01.444 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:01.444 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:01.444 CC examples/nvme/arbitration/arbitration.o 00:07:01.444 CC examples/nvme/hotplug/hotplug.o 00:07:01.444 CC examples/accel/perf/accel_perf.o 00:07:01.444 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:01.444 LINK dif 00:07:01.444 CC examples/blob/hello_world/hello_blob.o 00:07:01.444 CC examples/blob/cli/blobcli.o 00:07:01.444 LINK pmr_persistence 00:07:01.444 LINK hello_world 00:07:01.444 LINK cmb_copy 00:07:01.705 LINK hotplug 00:07:01.706 LINK reconnect 00:07:01.706 LINK arbitration 00:07:01.706 LINK abort 00:07:01.706 LINK hello_blob 00:07:01.706 LINK nvme_manage 00:07:01.706 LINK hello_fsdev 00:07:01.967 LINK accel_perf 00:07:01.967 LINK blobcli 00:07:01.967 LINK cuse 00:07:02.229 CC test/bdev/bdevio/bdevio.o 00:07:02.490 LINK bdevio 00:07:02.490 CC examples/bdev/hello_world/hello_bdev.o 00:07:02.490 CC examples/bdev/bdevperf/bdevperf.o 00:07:02.752 LINK hello_bdev 00:07:03.326 LINK bdevperf 00:07:03.898 CC examples/nvmf/nvmf/nvmf.o 00:07:04.160 LINK nvmf 00:07:05.555 LINK esnap 00:07:05.818 00:07:05.818 real 0m54.207s 00:07:05.818 user 7m58.358s 00:07:05.818 sys 6m0.877s 00:07:05.818 17:22:57 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:07:05.818 17:22:57 make -- common/autotest_common.sh@10 -- $ set +x 00:07:05.818 ************************************ 00:07:05.818 END TEST make 00:07:05.818 ************************************ 00:07:05.818 17:22:57 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:05.818 17:22:57 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:05.818 17:22:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:05.818 17:22:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:05.818 17:22:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:07:05.818 17:22:57 -- pm/common@44 -- $ pid=40941 00:07:05.818 17:22:57 -- pm/common@50 -- $ kill -TERM 40941 00:07:05.818 17:22:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:05.818 17:22:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:07:05.818 17:22:57 -- pm/common@44 -- $ pid=40942 00:07:05.818 17:22:57 -- pm/common@50 -- $ kill -TERM 40942 00:07:05.818 17:22:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:05.818 17:22:57 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:07:05.818 17:22:57 -- pm/common@44 -- $ pid=40944 00:07:05.818 17:22:57 -- pm/common@50 -- $ kill -TERM 40944 00:07:05.818 17:22:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:05.818 17:22:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:07:05.818 17:22:57 -- pm/common@44 -- $ pid=40968 00:07:05.818 17:22:57 -- pm/common@50 -- $ sudo -E kill -TERM 40968 00:07:05.818 17:22:57 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:05.818 17:22:57 -- common/autotest_common.sh@1681 -- # lcov --version 00:07:05.818 17:22:57 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:06.081 17:22:57 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:06.081 17:22:57 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.081 17:22:57 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.081 17:22:57 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.081 17:22:57 -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.081 17:22:57 -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.081 17:22:57 -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.081 17:22:57 -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.081 17:22:57 -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.081 17:22:57 -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.081 17:22:57 -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.081 17:22:57 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.081 17:22:57 -- scripts/common.sh@344 -- # case "$op" in 00:07:06.081 17:22:57 -- scripts/common.sh@345 -- # : 1 00:07:06.081 17:22:57 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.081 17:22:57 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.081 17:22:57 -- scripts/common.sh@365 -- # decimal 1 00:07:06.081 17:22:57 -- scripts/common.sh@353 -- # local d=1 00:07:06.081 17:22:57 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.081 17:22:57 -- scripts/common.sh@355 -- # echo 1 00:07:06.081 17:22:57 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.081 17:22:57 -- scripts/common.sh@366 -- # decimal 2 00:07:06.081 17:22:57 -- scripts/common.sh@353 -- # local d=2 00:07:06.081 17:22:57 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.081 17:22:57 -- scripts/common.sh@355 -- # echo 2 00:07:06.081 17:22:57 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.081 17:22:57 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.081 17:22:57 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.081 17:22:57 -- scripts/common.sh@368 -- # return 0 00:07:06.081 17:22:57 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.081 17:22:57 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:06.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.081 --rc genhtml_branch_coverage=1 00:07:06.081 --rc genhtml_function_coverage=1 00:07:06.081 --rc genhtml_legend=1 00:07:06.081 --rc geninfo_all_blocks=1 00:07:06.081 --rc geninfo_unexecuted_blocks=1 00:07:06.081 00:07:06.081 ' 00:07:06.081 17:22:57 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:06.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.081 --rc genhtml_branch_coverage=1 00:07:06.081 --rc genhtml_function_coverage=1 00:07:06.081 --rc genhtml_legend=1 00:07:06.081 --rc geninfo_all_blocks=1 00:07:06.081 --rc geninfo_unexecuted_blocks=1 00:07:06.081 00:07:06.081 ' 00:07:06.081 17:22:57 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:06.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.081 --rc genhtml_branch_coverage=1 00:07:06.081 --rc genhtml_function_coverage=1 00:07:06.081 --rc genhtml_legend=1 00:07:06.081 --rc geninfo_all_blocks=1 00:07:06.081 --rc geninfo_unexecuted_blocks=1 00:07:06.081 00:07:06.081 ' 00:07:06.081 17:22:57 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:06.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.081 --rc genhtml_branch_coverage=1 00:07:06.081 --rc genhtml_function_coverage=1 00:07:06.081 --rc genhtml_legend=1 00:07:06.081 --rc geninfo_all_blocks=1 00:07:06.081 --rc geninfo_unexecuted_blocks=1 00:07:06.081 00:07:06.081 ' 00:07:06.081 17:22:57 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.081 17:22:57 -- nvmf/common.sh@7 -- # uname -s 00:07:06.081 17:22:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.081 17:22:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.081 17:22:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.081 17:22:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.081 17:22:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.081 17:22:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.081 17:22:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.081 17:22:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.081 17:22:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.081 17:22:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.081 17:22:57 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:06.081 17:22:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:06.081 17:22:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.081 17:22:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.081 17:22:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.081 17:22:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.081 17:22:57 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.081 17:22:57 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:06.081 17:22:57 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.081 17:22:57 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.081 17:22:57 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.081 17:22:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.081 17:22:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.081 17:22:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.081 17:22:57 -- paths/export.sh@5 -- # export PATH 00:07:06.081 17:22:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.081 17:22:57 -- nvmf/common.sh@51 -- # : 0 00:07:06.082 17:22:57 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:06.082 17:22:57 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:06.082 17:22:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.082 17:22:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.082 17:22:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.082 17:22:57 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:06.082 17:22:57 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:06.082 17:22:57 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:06.082 17:22:57 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:06.082 17:22:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:06.082 17:22:57 -- spdk/autotest.sh@32 -- # uname -s 00:07:06.082 17:22:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:06.082 17:22:57 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:06.082 17:22:57 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
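[annotation] The `[: : integer expression expected` message in the trace above is emitted when nvmf/common.sh line 33 runs `'[' '' -eq 1 ']'`: the flag variable being tested is empty, and `test`'s `-eq` requires an integer on both sides. A minimal sketch of the usual defensive pattern (`SPDK_TEST_EXAMPLE` and `enable_example` are placeholder names, not the fix the SPDK scripts actually apply):

```bash
# Reproduces the failure logged above from nvmf/common.sh line 33:
#   '[' '' -eq 1 ']'  ->  [: : integer expression expected
# test's -eq needs integers on both sides, and the variable was empty.
# Defaulting the expansion keeps the test well-formed either way;
# SPDK_TEST_EXAMPLE / enable_example are illustrative names only.
if [ "${SPDK_TEST_EXAMPLE:-0}" -eq 1 ]; then
    enable_example
fi
```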
00:07:06.082 17:22:57 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:07:06.082 17:22:57 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:07:06.082 17:22:57 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:06.082 17:22:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:06.082 17:22:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:06.082 17:22:58 -- spdk/autotest.sh@48 -- # udevadm_pid=106897 00:07:06.082 17:22:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:06.082 17:22:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:06.082 17:22:58 -- pm/common@17 -- # local monitor 00:07:06.082 17:22:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:06.082 17:22:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:06.082 17:22:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:06.082 17:22:58 -- pm/common@21 -- # date +%s 00:07:06.082 17:22:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:06.082 17:22:58 -- pm/common@21 -- # date +%s 00:07:06.082 17:22:58 -- pm/common@25 -- # sleep 1 00:07:06.082 17:22:58 -- pm/common@21 -- # date +%s 00:07:06.082 17:22:58 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728400978 00:07:06.082 17:22:58 -- pm/common@21 -- # date +%s 00:07:06.082 17:22:58 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728400978 00:07:06.082 17:22:58 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728400978 00:07:06.082 17:22:58 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728400978 00:07:06.082 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728400978_collect-vmstat.pm.log 00:07:06.082 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728400978_collect-cpu-load.pm.log 00:07:06.082 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728400978_collect-cpu-temp.pm.log 00:07:06.344 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728400978_collect-bmc-pm.bmc.pm.log 00:07:07.291 17:22:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:07.291 17:22:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:07.291 17:22:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:07.291 17:22:59 -- common/autotest_common.sh@10 -- # set +x 00:07:07.291 17:22:59 -- spdk/autotest.sh@59 -- # create_test_list 00:07:07.291 17:22:59 -- common/autotest_common.sh@748 -- # xtrace_disable 00:07:07.291 17:22:59 -- common/autotest_common.sh@10 -- # set +x 00:07:07.291 17:22:59 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:07:07.291 17:22:59 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:07.291 17:22:59 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:07.291 17:22:59 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:07:07.291 17:22:59 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:07.291 17:22:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:07.291 17:22:59 -- common/autotest_common.sh@1455 -- # uname 00:07:07.291 17:22:59 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:07:07.291 17:22:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:07.291 17:22:59 -- common/autotest_common.sh@1475 -- # uname 00:07:07.291 17:22:59 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:07:07.291 17:22:59 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:07.291 17:22:59 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:07.291 lcov: LCOV version 1.15 00:07:07.291 17:22:59 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:07:22.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:22.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:07:37.359 17:23:29 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:37.359 17:23:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:37.359 17:23:29 -- common/autotest_common.sh@10 -- # set +x 00:07:37.359 17:23:29 -- spdk/autotest.sh@78 -- # rm -f 00:07:37.359 17:23:29 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:41.589 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:07:41.589 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:07:41.589 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:07:41.589 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:07:41.589 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:07:41.589 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:07:41.589 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:07:41.589 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:07:41.589 0000:65:00.0 (144d a80a): Already using the nvme driver 00:07:41.589 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:07:41.589 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:07:41.589 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:07:41.589 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:07:41.589 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:07:41.589 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:07:41.589 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:07:41.589 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:07:41.589 17:23:33 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:07:41.589 17:23:33 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:41.589 17:23:33 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:41.589 17:23:33 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:41.589 17:23:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:41.589 17:23:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:41.589 17:23:33 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:41.589 17:23:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:41.589 17:23:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:41.589 17:23:33 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:41.589 17:23:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:41.589 17:23:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:41.589 17:23:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:41.589 17:23:33 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:41.589 17:23:33 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:41.589 No valid GPT data, bailing 00:07:41.589 17:23:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:41.589 17:23:33 -- scripts/common.sh@394 -- # pt= 00:07:41.589 17:23:33 -- scripts/common.sh@395 -- # return 1 00:07:41.589 17:23:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:41.589 1+0 records in 00:07:41.589 1+0 records out 00:07:41.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00560412 s, 187 MB/s 00:07:41.589 17:23:33 -- spdk/autotest.sh@105 -- # sync 00:07:41.589 17:23:33 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:41.589 17:23:33 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:41.589 17:23:33 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:51.605 17:23:41 -- spdk/autotest.sh@111 -- # uname -s 00:07:51.605 17:23:41 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:51.605 17:23:41 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:51.605 17:23:41 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:07:53.594 Hugepages 00:07:53.594 node hugesize free / total 00:07:53.594 node0 1048576kB 0 / 0 00:07:53.594 node0 2048kB 0 / 0 00:07:53.594 node1 1048576kB 0 / 0 00:07:53.594 node1 2048kB 0 / 0 00:07:53.594 00:07:53.594 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:53.594 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:07:53.594 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:07:53.594 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:07:53.594 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:07:53.594 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:07:53.594 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:07:53.594 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:07:53.594 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:07:53.856 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:07:53.856 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:07:53.856 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:07:53.856 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:07:53.856 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:07:53.856 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:07:53.856 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:07:53.856 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:07:53.856 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:07:53.856 17:23:45 -- spdk/autotest.sh@117 -- # uname -s 00:07:53.856 17:23:45 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:53.856 17:23:45 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:53.856 17:23:45 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:58.074 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:07:58.074 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:07:58.074 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:07:58.074 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:07:58.074 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:07:58.074 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:07:58.074 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:07:58.074 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:07:58.074 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:07:58.074 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:07:58.074 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:07:58.074 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:07:58.074 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:07:58.074 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:07:58.074 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:07:58.074 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:07:59.465 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:07:59.727 17:23:51 -- common/autotest_common.sh@1515 -- # sleep 1 00:08:00.672 17:23:52 -- common/autotest_common.sh@1516 -- # bdfs=() 00:08:00.672 17:23:52 -- common/autotest_common.sh@1516 -- # local bdfs 00:08:00.673 17:23:52 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:08:00.673 17:23:52 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:08:00.673 17:23:52 -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:00.673 17:23:52 -- common/autotest_common.sh@1496 -- # local bdfs 00:08:00.673 17:23:52 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:00.673 17:23:52 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:00.673 17:23:52 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:00.673 17:23:52 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:08:00.673 17:23:52 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:08:00.673 17:23:52 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:04.886 Waiting for block devices as requested 00:08:04.886 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:08:04.886 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:08:04.886 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:08:04.886 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:08:04.886 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:08:04.886 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:08:04.886 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:08:04.886 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:08:04.886 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:08:05.147 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:08:05.147 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:08:05.147 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:08:05.409 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:08:05.409 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:08:05.409 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:08:05.674 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:08:05.674 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:08:05.935 17:23:57 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:08:05.935 17:23:57 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:08:05.935 17:23:57 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:08:05.935 17:23:57 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:08:05.935 17:23:57 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:08:05.935 17:23:57 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:08:05.935 17:23:57 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:08:05.935 17:23:57 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:08:05.935 17:23:57 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:08:05.935 17:23:57 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:08:05.935 17:23:57 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:08:05.935 17:23:57 -- common/autotest_common.sh@1529 -- # grep oacs 00:08:05.935 17:23:57 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:08:05.935 17:23:57 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:08:05.935 17:23:57 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:08:05.935 17:23:57 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:08:05.935 17:23:57 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:08:05.935 17:23:57 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:08:05.935 17:23:57 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:08:05.935 17:23:57 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:08:05.935 17:23:57 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:08:05.935 17:23:57 -- common/autotest_common.sh@1541 -- # continue 00:08:05.935 17:23:57 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:05.935 17:23:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:05.935 17:23:57 -- common/autotest_common.sh@10 -- # set +x 00:08:05.935 17:23:57 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:05.935 17:23:57 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:05.935 17:23:57 -- common/autotest_common.sh@10 -- # set +x 00:08:05.935 17:23:57 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:10.148 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:08:10.148 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:08:10.148 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:08:10.148 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:08:10.148 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:08:10.148 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:08:10.148 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:08:10.148 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:08:10.148 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:08:10.148 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:08:10.148 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:08:10.148 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:08:10.148 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:08:10.148 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:08:10.148 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:08:10.148 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:08:10.148 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:08:10.148 17:24:01 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:08:10.148 17:24:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:10.148 17:24:01 -- common/autotest_common.sh@10 -- # set +x 00:08:10.148 17:24:02 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:10.148 17:24:02 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:08:10.148 17:24:02 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:08:10.148 17:24:02 -- common/autotest_common.sh@1561 -- # bdfs=() 00:08:10.148 17:24:02 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:08:10.148 17:24:02 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:08:10.148 17:24:02 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:08:10.148 17:24:02 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:08:10.148 17:24:02 -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:10.148 17:24:02 -- common/autotest_common.sh@1496 -- # local bdfs 00:08:10.148 17:24:02 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:10.148 17:24:02 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:10.148 17:24:02 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:10.148 17:24:02 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:08:10.148 17:24:02 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:08:10.411 17:24:02 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:08:10.411 17:24:02 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:08:10.411 17:24:02 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:08:10.411 17:24:02 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:08:10.411 17:24:02 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:08:10.411 17:24:02 -- common/autotest_common.sh@1570 -- # return 0 00:08:10.411 17:24:02 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:08:10.411 17:24:02 -- common/autotest_common.sh@1578 -- # return 0 00:08:10.411 17:24:02 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:10.411 17:24:02 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:10.411 17:24:02 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:10.411 17:24:02 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:10.411 17:24:02 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:10.411 17:24:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:10.411 17:24:02 -- common/autotest_common.sh@10 -- # set +x 00:08:10.411 17:24:02 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:10.411 17:24:02 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:10.412 17:24:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:10.412 17:24:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.412 17:24:02 -- common/autotest_common.sh@10 -- # set +x 00:08:10.412 ************************************ 00:08:10.412 START TEST env 00:08:10.412 ************************************ 00:08:10.412 17:24:02 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:10.412 * Looking for test storage... 
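[annotation] The `get_nvme_bdfs` helper traced above enumerates controllers by piping gen_nvme.sh's JSON config through jq and collecting each `traddr`; on this machine it yields the single BDF 0000:65:00.0. A standalone sketch of that extraction, with the same empty-result guard as the `(( 1 == 0 ))` check in the trace (paths taken from this log):

```bash
#!/usr/bin/env bash
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# gen_nvme.sh prints an NVMe attach config as JSON; each entry's
# params.traddr is the PCI address (BDF) of one controller, exactly
# the jq filter shown in the trace above.
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

if (( ${#bdfs[@]} == 0 )); then
    echo "no NVMe controllers found" >&2
    exit 1
fi

printf '%s\n' "${bdfs[@]}"    # -> 0000:65:00.0 on this machine
```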
00:08:10.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:08:10.412 17:24:02 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:10.412 17:24:02 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:10.412 17:24:02 env -- common/autotest_common.sh@1681 -- # lcov --version 00:08:10.412 17:24:02 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:10.412 17:24:02 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.412 17:24:02 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.412 17:24:02 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.412 17:24:02 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.412 17:24:02 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.412 17:24:02 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.412 17:24:02 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.412 17:24:02 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.412 17:24:02 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.412 17:24:02 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.412 17:24:02 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.412 17:24:02 env -- scripts/common.sh@344 -- # case "$op" in 00:08:10.412 17:24:02 env -- scripts/common.sh@345 -- # : 1 00:08:10.412 17:24:02 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.412 17:24:02 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:10.412 17:24:02 env -- scripts/common.sh@365 -- # decimal 1 00:08:10.412 17:24:02 env -- scripts/common.sh@353 -- # local d=1 00:08:10.412 17:24:02 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.412 17:24:02 env -- scripts/common.sh@355 -- # echo 1 00:08:10.412 17:24:02 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.412 17:24:02 env -- scripts/common.sh@366 -- # decimal 2 00:08:10.674 17:24:02 env -- scripts/common.sh@353 -- # local d=2 00:08:10.674 17:24:02 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.674 17:24:02 env -- scripts/common.sh@355 -- # echo 2 00:08:10.674 17:24:02 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.674 17:24:02 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.674 17:24:02 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.674 17:24:02 env -- scripts/common.sh@368 -- # return 0 00:08:10.674 17:24:02 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.674 17:24:02 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:10.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.674 --rc genhtml_branch_coverage=1 00:08:10.674 --rc genhtml_function_coverage=1 00:08:10.674 --rc genhtml_legend=1 00:08:10.674 --rc geninfo_all_blocks=1 00:08:10.674 --rc geninfo_unexecuted_blocks=1 00:08:10.674 00:08:10.674 ' 00:08:10.674 17:24:02 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:10.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.674 --rc genhtml_branch_coverage=1 00:08:10.674 --rc genhtml_function_coverage=1 00:08:10.674 --rc genhtml_legend=1 00:08:10.674 --rc geninfo_all_blocks=1 00:08:10.674 --rc geninfo_unexecuted_blocks=1 00:08:10.674 00:08:10.674 ' 00:08:10.674 17:24:02 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:10.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.674 --rc genhtml_branch_coverage=1 00:08:10.674 --rc genhtml_function_coverage=1 
00:08:10.674 --rc genhtml_legend=1 00:08:10.674 --rc geninfo_all_blocks=1 00:08:10.674 --rc geninfo_unexecuted_blocks=1 00:08:10.674 00:08:10.674 ' 00:08:10.674 17:24:02 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:10.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.675 --rc genhtml_branch_coverage=1 00:08:10.675 --rc genhtml_function_coverage=1 00:08:10.675 --rc genhtml_legend=1 00:08:10.675 --rc geninfo_all_blocks=1 00:08:10.675 --rc geninfo_unexecuted_blocks=1 00:08:10.675 00:08:10.675 ' 00:08:10.675 17:24:02 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:10.675 17:24:02 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:10.675 17:24:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.675 17:24:02 env -- common/autotest_common.sh@10 -- # set +x 00:08:10.675 ************************************ 00:08:10.675 START TEST env_memory 00:08:10.675 ************************************ 00:08:10.675 17:24:02 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:10.675 00:08:10.675 00:08:10.675 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.675 http://cunit.sourceforge.net/ 00:08:10.675 00:08:10.675 00:08:10.675 Suite: memory 00:08:10.675 Test: alloc and free memory map ...[2024-10-08 17:24:02.480944] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:10.675 passed 00:08:10.675 Test: mem map translation ...[2024-10-08 17:24:02.498621] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:10.675 [2024-10-08 17:24:02.498648] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:10.675 [2024-10-08 17:24:02.498683] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:10.675 [2024-10-08 17:24:02.498689] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:10.675 passed 00:08:10.675 Test: mem map registration ...[2024-10-08 17:24:02.536757] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:10.675 [2024-10-08 17:24:02.536777] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:10.675 passed 00:08:10.675 Test: mem map adjacent registrations ...passed 00:08:10.675 00:08:10.675 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.675 suites 1 1 n/a 0 0 00:08:10.675 tests 4 4 4 0 0 00:08:10.675 asserts 152 152 152 0 n/a 00:08:10.675 00:08:10.675 Elapsed time = 0.125 seconds 00:08:10.675 00:08:10.675 real 0m0.141s 00:08:10.675 user 0m0.124s 00:08:10.675 sys 0m0.015s 00:08:10.675 17:24:02 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.675 17:24:02 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
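[annotation] `run_test`, which frames each suite here (the `START TEST env_memory` banner and the `real/user/sys` timing above, with the matching `END TEST` banner just below), is a wrapper from autotest_common.sh. A stripped-down rendering of its skeleton, reconstructed from what the trace shows rather than copied from the real helper:

```bash
# Illustrative skeleton only; the real run_test in autotest_common.sh
# also manages xtrace state and failure bookkeeping.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # run the suite binary or script itself
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test env_memory "$rootdir/test/env/memory/memory_ut"
```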
00:08:10.675 ************************************ 00:08:10.675 END TEST env_memory 00:08:10.675 ************************************ 00:08:10.675 17:24:02 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:10.675 17:24:02 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:10.675 17:24:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.675 17:24:02 env -- common/autotest_common.sh@10 -- # set +x 00:08:10.675 ************************************ 00:08:10.675 START TEST env_vtophys 00:08:10.675 ************************************ 00:08:10.675 17:24:02 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:10.937 EAL: lib.eal log level changed from notice to debug 00:08:10.937 EAL: Detected lcore 0 as core 0 on socket 0 00:08:10.937 EAL: Detected lcore 1 as core 1 on socket 0 00:08:10.937 EAL: Detected lcore 2 as core 2 on socket 0 00:08:10.937 EAL: Detected lcore 3 as core 3 on socket 0 00:08:10.937 EAL: Detected lcore 4 as core 4 on socket 0 00:08:10.937 EAL: Detected lcore 5 as core 5 on socket 0 00:08:10.937 EAL: Detected lcore 6 as core 6 on socket 0 00:08:10.937 EAL: Detected lcore 7 as core 7 on socket 0 00:08:10.937 EAL: Detected lcore 8 as core 8 on socket 0 00:08:10.937 EAL: Detected lcore 9 as core 9 on socket 0 00:08:10.937 EAL: Detected lcore 10 as core 10 on socket 0 00:08:10.937 EAL: Detected lcore 11 as core 11 on socket 0 00:08:10.937 EAL: Detected lcore 12 as core 12 on socket 0 00:08:10.937 EAL: Detected lcore 13 as core 13 on socket 0 00:08:10.937 EAL: Detected lcore 14 as core 14 on socket 0 00:08:10.937 EAL: Detected lcore 15 as core 15 on socket 0 00:08:10.937 EAL: Detected lcore 16 as core 16 on socket 0 00:08:10.938 EAL: Detected lcore 17 as core 17 on socket 0 00:08:10.938 EAL: Detected lcore 18 as core 18 on socket 0 00:08:10.938 EAL: Detected lcore 19 as core 19 on socket 0 00:08:10.938 EAL: Detected lcore 20 as core 20 on socket 0 00:08:10.938 EAL: Detected lcore 21 as core 21 on socket 0 00:08:10.938 EAL: Detected lcore 22 as core 22 on socket 0 00:08:10.938 EAL: Detected lcore 23 as core 23 on socket 0 00:08:10.938 EAL: Detected lcore 24 as core 24 on socket 0 00:08:10.938 EAL: Detected lcore 25 as core 25 on socket 0 00:08:10.938 EAL: Detected lcore 26 as core 26 on socket 0 00:08:10.938 EAL: Detected lcore 27 as core 27 on socket 0 00:08:10.938 EAL: Detected lcore 28 as core 28 on socket 0 00:08:10.938 EAL: Detected lcore 29 as core 29 on socket 0 00:08:10.938 EAL: Detected lcore 30 as core 30 on socket 0 00:08:10.938 EAL: Detected lcore 31 as core 31 on socket 0 00:08:10.938 EAL: Detected lcore 32 as core 32 on socket 0 00:08:10.938 EAL: Detected lcore 33 as core 33 on socket 0 00:08:10.938 EAL: Detected lcore 34 as core 34 on socket 0 00:08:10.938 EAL: Detected lcore 35 as core 35 on socket 0 00:08:10.938 EAL: Detected lcore 36 as core 0 on socket 1 00:08:10.938 EAL: Detected lcore 37 as core 1 on socket 1 00:08:10.938 EAL: Detected lcore 38 as core 2 on socket 1 00:08:10.938 EAL: Detected lcore 39 as core 3 on socket 1 00:08:10.938 EAL: Detected lcore 40 as core 4 on socket 1 00:08:10.938 EAL: Detected lcore 41 as core 5 on socket 1 00:08:10.938 EAL: Detected lcore 42 as core 6 on socket 1 00:08:10.938 EAL: Detected lcore 43 as core 7 on socket 1 00:08:10.938 EAL: Detected lcore 44 as core 8 on socket 1 00:08:10.938 EAL: Detected lcore 45 as core 9 on socket 1 
00:08:10.938 EAL: Detected lcore 46 as core 10 on socket 1 00:08:10.938 EAL: Detected lcore 47 as core 11 on socket 1 00:08:10.938 EAL: Detected lcore 48 as core 12 on socket 1 00:08:10.938 EAL: Detected lcore 49 as core 13 on socket 1 00:08:10.938 EAL: Detected lcore 50 as core 14 on socket 1 00:08:10.938 EAL: Detected lcore 51 as core 15 on socket 1 00:08:10.938 EAL: Detected lcore 52 as core 16 on socket 1 00:08:10.938 EAL: Detected lcore 53 as core 17 on socket 1 00:08:10.938 EAL: Detected lcore 54 as core 18 on socket 1 00:08:10.938 EAL: Detected lcore 55 as core 19 on socket 1 00:08:10.938 EAL: Detected lcore 56 as core 20 on socket 1 00:08:10.938 EAL: Detected lcore 57 as core 21 on socket 1 00:08:10.938 EAL: Detected lcore 58 as core 22 on socket 1 00:08:10.938 EAL: Detected lcore 59 as core 23 on socket 1 00:08:10.938 EAL: Detected lcore 60 as core 24 on socket 1 00:08:10.938 EAL: Detected lcore 61 as core 25 on socket 1 00:08:10.938 EAL: Detected lcore 62 as core 26 on socket 1 00:08:10.938 EAL: Detected lcore 63 as core 27 on socket 1 00:08:10.938 EAL: Detected lcore 64 as core 28 on socket 1 00:08:10.938 EAL: Detected lcore 65 as core 29 on socket 1 00:08:10.938 EAL: Detected lcore 66 as core 30 on socket 1 00:08:10.938 EAL: Detected lcore 67 as core 31 on socket 1 00:08:10.938 EAL: Detected lcore 68 as core 32 on socket 1 00:08:10.938 EAL: Detected lcore 69 as core 33 on socket 1 00:08:10.938 EAL: Detected lcore 70 as core 34 on socket 1 00:08:10.938 EAL: Detected lcore 71 as core 35 on socket 1 00:08:10.938 EAL: Detected lcore 72 as core 0 on socket 0 00:08:10.938 EAL: Detected lcore 73 as core 1 on socket 0 00:08:10.938 EAL: Detected lcore 74 as core 2 on socket 0 00:08:10.938 EAL: Detected lcore 75 as core 3 on socket 0 00:08:10.938 EAL: Detected lcore 76 as core 4 on socket 0 00:08:10.938 EAL: Detected lcore 77 as core 5 on socket 0 00:08:10.938 EAL: Detected lcore 78 as core 6 on socket 0 00:08:10.938 EAL: Detected lcore 79 as core 7 on socket 0 00:08:10.938 EAL: Detected lcore 80 as core 8 on socket 0 00:08:10.938 EAL: Detected lcore 81 as core 9 on socket 0 00:08:10.938 EAL: Detected lcore 82 as core 10 on socket 0 00:08:10.938 EAL: Detected lcore 83 as core 11 on socket 0 00:08:10.938 EAL: Detected lcore 84 as core 12 on socket 0 00:08:10.938 EAL: Detected lcore 85 as core 13 on socket 0 00:08:10.938 EAL: Detected lcore 86 as core 14 on socket 0 00:08:10.938 EAL: Detected lcore 87 as core 15 on socket 0 00:08:10.938 EAL: Detected lcore 88 as core 16 on socket 0 00:08:10.938 EAL: Detected lcore 89 as core 17 on socket 0 00:08:10.938 EAL: Detected lcore 90 as core 18 on socket 0 00:08:10.938 EAL: Detected lcore 91 as core 19 on socket 0 00:08:10.938 EAL: Detected lcore 92 as core 20 on socket 0 00:08:10.938 EAL: Detected lcore 93 as core 21 on socket 0 00:08:10.938 EAL: Detected lcore 94 as core 22 on socket 0 00:08:10.938 EAL: Detected lcore 95 as core 23 on socket 0 00:08:10.938 EAL: Detected lcore 96 as core 24 on socket 0 00:08:10.938 EAL: Detected lcore 97 as core 25 on socket 0 00:08:10.938 EAL: Detected lcore 98 as core 26 on socket 0 00:08:10.938 EAL: Detected lcore 99 as core 27 on socket 0 00:08:10.938 EAL: Detected lcore 100 as core 28 on socket 0 00:08:10.938 EAL: Detected lcore 101 as core 29 on socket 0 00:08:10.938 EAL: Detected lcore 102 as core 30 on socket 0 00:08:10.938 EAL: Detected lcore 103 as core 31 on socket 0 00:08:10.938 EAL: Detected lcore 104 as core 32 on socket 0 00:08:10.938 EAL: Detected lcore 105 as core 33 on socket 0 00:08:10.938 EAL: 
Detected lcore 106 as core 34 on socket 0 00:08:10.938 EAL: Detected lcore 107 as core 35 on socket 0 00:08:10.938 EAL: Detected lcore 108 as core 0 on socket 1 00:08:10.938 EAL: Detected lcore 109 as core 1 on socket 1 00:08:10.938 EAL: Detected lcore 110 as core 2 on socket 1 00:08:10.938 EAL: Detected lcore 111 as core 3 on socket 1 00:08:10.938 EAL: Detected lcore 112 as core 4 on socket 1 00:08:10.938 EAL: Detected lcore 113 as core 5 on socket 1 00:08:10.938 EAL: Detected lcore 114 as core 6 on socket 1 00:08:10.938 EAL: Detected lcore 115 as core 7 on socket 1 00:08:10.938 EAL: Detected lcore 116 as core 8 on socket 1 00:08:10.938 EAL: Detected lcore 117 as core 9 on socket 1 00:08:10.938 EAL: Detected lcore 118 as core 10 on socket 1 00:08:10.938 EAL: Detected lcore 119 as core 11 on socket 1 00:08:10.938 EAL: Detected lcore 120 as core 12 on socket 1 00:08:10.938 EAL: Detected lcore 121 as core 13 on socket 1 00:08:10.938 EAL: Detected lcore 122 as core 14 on socket 1 00:08:10.938 EAL: Detected lcore 123 as core 15 on socket 1 00:08:10.938 EAL: Detected lcore 124 as core 16 on socket 1 00:08:10.938 EAL: Detected lcore 125 as core 17 on socket 1 00:08:10.938 EAL: Detected lcore 126 as core 18 on socket 1 00:08:10.938 EAL: Detected lcore 127 as core 19 on socket 1 00:08:10.938 EAL: Skipped lcore 128 as core 20 on socket 1 00:08:10.938 EAL: Skipped lcore 129 as core 21 on socket 1 00:08:10.938 EAL: Skipped lcore 130 as core 22 on socket 1 00:08:10.938 EAL: Skipped lcore 131 as core 23 on socket 1 00:08:10.938 EAL: Skipped lcore 132 as core 24 on socket 1 00:08:10.938 EAL: Skipped lcore 133 as core 25 on socket 1 00:08:10.938 EAL: Skipped lcore 134 as core 26 on socket 1 00:08:10.938 EAL: Skipped lcore 135 as core 27 on socket 1 00:08:10.938 EAL: Skipped lcore 136 as core 28 on socket 1 00:08:10.938 EAL: Skipped lcore 137 as core 29 on socket 1 00:08:10.938 EAL: Skipped lcore 138 as core 30 on socket 1 00:08:10.938 EAL: Skipped lcore 139 as core 31 on socket 1 00:08:10.938 EAL: Skipped lcore 140 as core 32 on socket 1 00:08:10.938 EAL: Skipped lcore 141 as core 33 on socket 1 00:08:10.938 EAL: Skipped lcore 142 as core 34 on socket 1 00:08:10.938 EAL: Skipped lcore 143 as core 35 on socket 1 00:08:10.938 EAL: Maximum logical cores by configuration: 128 00:08:10.938 EAL: Detected CPU lcores: 128 00:08:10.938 EAL: Detected NUMA nodes: 2 00:08:10.938 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:10.938 EAL: Detected shared linkage of DPDK 00:08:10.938 EAL: No shared files mode enabled, IPC will be disabled 00:08:10.938 EAL: Bus pci wants IOVA as 'DC' 00:08:10.938 EAL: Buses did not request a specific IOVA mode. 00:08:10.938 EAL: IOMMU is available, selecting IOVA as VA mode. 00:08:10.938 EAL: Selected IOVA mode 'VA' 00:08:10.938 EAL: Probing VFIO support... 00:08:10.938 EAL: IOMMU type 1 (Type 1) is supported 00:08:10.938 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:10.938 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:10.938 EAL: VFIO support initialized 00:08:10.938 EAL: Ask a virtual area of 0x2e000 bytes 00:08:10.938 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:10.938 EAL: Setting up physically contiguous memory... 
00:08:10.938 EAL: Setting maximum number of open files to 524288 00:08:10.938 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:10.938 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:08:10.938 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:10.938 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.938 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:10.938 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:10.938 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.938 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:10.938 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:10.938 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.938 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:10.938 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:10.938 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.938 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:10.938 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:10.938 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.938 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:10.938 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:10.938 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.938 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:10.938 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:10.938 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.938 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:10.938 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:10.938 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.938 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:10.938 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:10.938 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:08:10.938 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.938 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:08:10.938 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:10.938 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.938 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:08:10.938 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:08:10.938 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.938 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:08:10.938 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:10.938 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.938 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:08:10.938 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:08:10.938 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.938 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:08:10.938 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:10.938 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.938 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:08:10.938 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:08:10.938 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.939 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:08:10.939 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:10.939 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.939 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:08:10.939 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:08:10.939 EAL: Hugepages will be freed exactly as allocated. 00:08:10.939 EAL: No shared files mode enabled, IPC is disabled 00:08:10.939 EAL: No shared files mode enabled, IPC is disabled 00:08:10.939 EAL: TSC frequency is ~2400000 KHz 00:08:10.939 EAL: Main lcore 0 is ready (tid=7ff708c50a00;cpuset=[0]) 00:08:10.939 EAL: Trying to obtain current memory policy. 00:08:10.939 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.939 EAL: Restoring previous memory policy: 0 00:08:10.939 EAL: request: mp_malloc_sync 00:08:10.939 EAL: No shared files mode enabled, IPC is disabled 00:08:10.939 EAL: Heap on socket 0 was expanded by 2MB 00:08:10.939 EAL: No shared files mode enabled, IPC is disabled 00:08:10.939 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:10.939 EAL: Mem event callback 'spdk:(nil)' registered 00:08:10.939 00:08:10.939 00:08:10.939 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.939 http://cunit.sourceforge.net/ 00:08:10.939 00:08:10.939 00:08:10.939 Suite: components_suite 00:08:10.939 Test: vtophys_malloc_test ...passed 00:08:10.939 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:10.939 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.939 EAL: Restoring previous memory policy: 4 00:08:10.939 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.939 EAL: request: mp_malloc_sync 00:08:10.939 EAL: No shared files mode enabled, IPC is disabled 00:08:10.939 EAL: Heap on socket 0 was expanded by 4MB 00:08:10.939 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.939 EAL: request: mp_malloc_sync 00:08:10.939 EAL: No shared files mode enabled, IPC is disabled 00:08:10.939 EAL: Heap on socket 0 was shrunk by 4MB 00:08:10.939 EAL: Trying to obtain current memory policy. 00:08:10.939 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.939 EAL: Restoring previous memory policy: 4 00:08:10.939 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.939 EAL: request: mp_malloc_sync 00:08:10.939 EAL: No shared files mode enabled, IPC is disabled 00:08:10.939 EAL: Heap on socket 0 was expanded by 6MB 00:08:10.939 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.939 EAL: request: mp_malloc_sync 00:08:10.939 EAL: No shared files mode enabled, IPC is disabled 00:08:10.939 EAL: Heap on socket 0 was shrunk by 6MB 00:08:10.939 EAL: Trying to obtain current memory policy. 00:08:10.939 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.939 EAL: Restoring previous memory policy: 4 00:08:10.939 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.939 EAL: request: mp_malloc_sync 00:08:10.939 EAL: No shared files mode enabled, IPC is disabled 00:08:10.939 EAL: Heap on socket 0 was expanded by 10MB 00:08:10.939 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.939 EAL: request: mp_malloc_sync 00:08:10.939 EAL: No shared files mode enabled, IPC is disabled 00:08:10.939 EAL: Heap on socket 0 was shrunk by 10MB 00:08:10.939 EAL: Trying to obtain current memory policy. 
00:08:10.939 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.939 EAL: Restoring previous memory policy: 4 00:08:10.939 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.939 EAL: request: mp_malloc_sync 00:08:10.939 EAL: No shared files mode enabled, IPC is disabled 00:08:10.939 EAL: Heap on socket 0 was expanded by 18MB 00:08:10.939 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.939 EAL: request: mp_malloc_sync 00:08:10.939 EAL: No shared files mode enabled, IPC is disabled 00:08:10.939 EAL: Heap on socket 0 was shrunk by 18MB 00:08:10.939 EAL: Trying to obtain current memory policy. 00:08:10.939 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.939 EAL: Restoring previous memory policy: 4 00:08:10.939 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.939 EAL: request: mp_malloc_sync 00:08:10.939 EAL: No shared files mode enabled, IPC is disabled 00:08:10.939 EAL: Heap on socket 0 was expanded by 34MB 00:08:10.939 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.939 EAL: request: mp_malloc_sync 00:08:10.939 EAL: No shared files mode enabled, IPC is disabled 00:08:10.939 EAL: Heap on socket 0 was shrunk by 34MB 00:08:10.939 EAL: Trying to obtain current memory policy. 00:08:10.939 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.939 EAL: Restoring previous memory policy: 4 00:08:10.939 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.939 EAL: request: mp_malloc_sync 00:08:10.939 EAL: No shared files mode enabled, IPC is disabled 00:08:10.939 EAL: Heap on socket 0 was expanded by 66MB 00:08:10.939 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.939 EAL: request: mp_malloc_sync 00:08:10.939 EAL: No shared files mode enabled, IPC is disabled 00:08:10.939 EAL: Heap on socket 0 was shrunk by 66MB 00:08:10.939 EAL: Trying to obtain current memory policy. 00:08:10.939 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.939 EAL: Restoring previous memory policy: 4 00:08:10.939 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.939 EAL: request: mp_malloc_sync 00:08:10.939 EAL: No shared files mode enabled, IPC is disabled 00:08:10.939 EAL: Heap on socket 0 was expanded by 130MB 00:08:10.939 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.939 EAL: request: mp_malloc_sync 00:08:10.939 EAL: No shared files mode enabled, IPC is disabled 00:08:10.939 EAL: Heap on socket 0 was shrunk by 130MB 00:08:10.939 EAL: Trying to obtain current memory policy. 00:08:10.939 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.939 EAL: Restoring previous memory policy: 4 00:08:10.939 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.939 EAL: request: mp_malloc_sync 00:08:10.939 EAL: No shared files mode enabled, IPC is disabled 00:08:10.939 EAL: Heap on socket 0 was expanded by 258MB 00:08:11.200 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.200 EAL: request: mp_malloc_sync 00:08:11.200 EAL: No shared files mode enabled, IPC is disabled 00:08:11.200 EAL: Heap on socket 0 was shrunk by 258MB 00:08:11.200 EAL: Trying to obtain current memory policy. 
00:08:11.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:11.200 EAL: Restoring previous memory policy: 4 00:08:11.200 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.200 EAL: request: mp_malloc_sync 00:08:11.200 EAL: No shared files mode enabled, IPC is disabled 00:08:11.200 EAL: Heap on socket 0 was expanded by 514MB 00:08:11.200 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.200 EAL: request: mp_malloc_sync 00:08:11.200 EAL: No shared files mode enabled, IPC is disabled 00:08:11.200 EAL: Heap on socket 0 was shrunk by 514MB 00:08:11.200 EAL: Trying to obtain current memory policy. 00:08:11.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:11.461 EAL: Restoring previous memory policy: 4 00:08:11.461 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.461 EAL: request: mp_malloc_sync 00:08:11.461 EAL: No shared files mode enabled, IPC is disabled 00:08:11.461 EAL: Heap on socket 0 was expanded by 1026MB 00:08:11.461 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.723 EAL: request: mp_malloc_sync 00:08:11.723 EAL: No shared files mode enabled, IPC is disabled 00:08:11.723 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:11.723 passed 00:08:11.723 00:08:11.723 Run Summary: Type Total Ran Passed Failed Inactive 00:08:11.723 suites 1 1 n/a 0 0 00:08:11.723 tests 2 2 2 0 0 00:08:11.723 asserts 497 497 497 0 n/a 00:08:11.723 00:08:11.723 Elapsed time = 0.690 seconds 00:08:11.723 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.723 EAL: request: mp_malloc_sync 00:08:11.723 EAL: No shared files mode enabled, IPC is disabled 00:08:11.723 EAL: Heap on socket 0 was shrunk by 2MB 00:08:11.723 EAL: No shared files mode enabled, IPC is disabled 00:08:11.723 EAL: No shared files mode enabled, IPC is disabled 00:08:11.723 EAL: No shared files mode enabled, IPC is disabled 00:08:11.723 00:08:11.723 real 0m0.828s 00:08:11.723 user 0m0.430s 00:08:11.723 sys 0m0.372s 00:08:11.723 17:24:03 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.723 17:24:03 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:11.723 ************************************ 00:08:11.723 END TEST env_vtophys 00:08:11.723 ************************************ 00:08:11.723 17:24:03 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:11.723 17:24:03 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:11.723 17:24:03 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.723 17:24:03 env -- common/autotest_common.sh@10 -- # set +x 00:08:11.723 ************************************ 00:08:11.723 START TEST env_pci 00:08:11.723 ************************************ 00:08:11.723 17:24:03 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:11.723 00:08:11.723 00:08:11.723 CUnit - A unit testing framework for C - Version 2.1-3 00:08:11.723 http://cunit.sourceforge.net/ 00:08:11.723 00:08:11.723 00:08:11.723 Suite: pci 00:08:11.723 Test: pci_hook ...[2024-10-08 17:24:03.586893] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1111:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 126482 has claimed it 00:08:11.723 EAL: Cannot find device (10000:00:01.0) 00:08:11.723 EAL: Failed to attach device on primary process 00:08:11.723 passed 00:08:11.723 00:08:11.723 Run Summary: Type Total Ran Passed Failed Inactive 
00:08:11.723 suites 1 1 n/a 0 0 00:08:11.723 tests 1 1 1 0 0 00:08:11.723 asserts 25 25 25 0 n/a 00:08:11.723 00:08:11.723 Elapsed time = 0.031 seconds 00:08:11.723 00:08:11.723 real 0m0.052s 00:08:11.723 user 0m0.015s 00:08:11.723 sys 0m0.037s 00:08:11.723 17:24:03 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.723 17:24:03 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:11.723 ************************************ 00:08:11.723 END TEST env_pci 00:08:11.723 ************************************ 00:08:11.723 17:24:03 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:11.723 17:24:03 env -- env/env.sh@15 -- # uname 00:08:11.723 17:24:03 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:11.723 17:24:03 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:11.723 17:24:03 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:11.723 17:24:03 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:11.723 17:24:03 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.723 17:24:03 env -- common/autotest_common.sh@10 -- # set +x 00:08:11.723 ************************************ 00:08:11.723 START TEST env_dpdk_post_init 00:08:11.723 ************************************ 00:08:11.723 17:24:03 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:11.986 EAL: Detected CPU lcores: 128 00:08:11.986 EAL: Detected NUMA nodes: 2 00:08:11.986 EAL: Detected shared linkage of DPDK 00:08:11.986 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:11.986 EAL: Selected IOVA mode 'VA' 00:08:11.986 EAL: VFIO support initialized 00:08:11.986 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:11.986 EAL: Using IOMMU type 1 (Type 1) 00:08:11.986 EAL: Ignore mapping IO port bar(1) 00:08:12.248 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:08:12.248 EAL: Ignore mapping IO port bar(1) 00:08:12.511 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:08:12.511 EAL: Ignore mapping IO port bar(1) 00:08:12.772 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:08:12.772 EAL: Ignore mapping IO port bar(1) 00:08:12.772 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:08:13.034 EAL: Ignore mapping IO port bar(1) 00:08:13.034 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:08:13.296 EAL: Ignore mapping IO port bar(1) 00:08:13.296 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:08:13.558 EAL: Ignore mapping IO port bar(1) 00:08:13.558 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:08:13.558 EAL: Ignore mapping IO port bar(1) 00:08:13.819 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:08:14.081 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:08:14.081 EAL: Ignore mapping IO port bar(1) 00:08:14.342 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:08:14.342 EAL: Ignore mapping IO port bar(1) 00:08:14.342 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:08:14.603 EAL: Ignore mapping IO port bar(1) 00:08:14.603 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:08:14.865 EAL: Ignore mapping IO port bar(1) 00:08:14.865 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:08:15.127 EAL: Ignore mapping IO port bar(1) 00:08:15.127 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:08:15.127 EAL: Ignore mapping IO port bar(1) 00:08:15.388 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:08:15.388 EAL: Ignore mapping IO port bar(1) 00:08:15.650 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:08:15.650 EAL: Ignore mapping IO port bar(1) 00:08:15.912 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:08:15.912 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:08:15.912 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:08:15.912 Starting DPDK initialization... 00:08:15.912 Starting SPDK post initialization... 00:08:15.912 SPDK NVMe probe 00:08:15.912 Attaching to 0000:65:00.0 00:08:15.912 Attached to 0000:65:00.0 00:08:15.912 Cleaning up... 00:08:17.834 00:08:17.834 real 0m5.738s 00:08:17.834 user 0m0.105s 00:08:17.834 sys 0m0.188s 00:08:17.834 17:24:09 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.834 17:24:09 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:17.834 ************************************ 00:08:17.834 END TEST env_dpdk_post_init 00:08:17.834 ************************************ 00:08:17.834 17:24:09 env -- env/env.sh@26 -- # uname 00:08:17.834 17:24:09 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:17.834 17:24:09 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:17.834 17:24:09 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:17.834 17:24:09 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.834 17:24:09 env -- common/autotest_common.sh@10 -- # set +x 00:08:17.834 ************************************ 00:08:17.834 START TEST env_mem_callbacks 00:08:17.834 ************************************ 00:08:17.834 17:24:09 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:17.834 EAL: Detected CPU lcores: 128 00:08:17.834 EAL: Detected NUMA nodes: 2 00:08:17.834 EAL: Detected shared linkage of DPDK 00:08:17.834 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:17.834 EAL: Selected IOVA mode 'VA' 00:08:17.834 EAL: VFIO support initialized 00:08:17.834 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:17.834 00:08:17.834 00:08:17.834 CUnit - A unit testing framework for C - Version 2.1-3 00:08:17.834 http://cunit.sourceforge.net/ 00:08:17.834 00:08:17.834 00:08:17.834 Suite: memory 00:08:17.834 Test: test ... 
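The memory suite output that follows exercises the other direction of the same machinery: the test registers and unregisters address ranges by hand and checks that small allocations (the 64-byte malloc) are served from an already-registered region, while large ones produce fresh register/unregister notifications. A sketch of re-running just this suite outside Jenkins, using the binary path from the log; reserving hugepages via scripts/setup.sh first is an assumed precondition:

  # Rerun the mem_callbacks unit test on its own; paths are taken from the
  # log above, and hugepages must be reserved before launching.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./scripts/setup.sh
  sudo ./test/env/mem_callbacks/mem_callbacks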
00:08:17.834 register 0x200000200000 2097152 00:08:17.834 malloc 3145728 00:08:17.834 register 0x200000400000 4194304 00:08:17.834 buf 0x200000500000 len 3145728 PASSED 00:08:17.834 malloc 64 00:08:17.834 buf 0x2000004fff40 len 64 PASSED 00:08:17.834 malloc 4194304 00:08:17.834 register 0x200000800000 6291456 00:08:17.834 buf 0x200000a00000 len 4194304 PASSED 00:08:17.834 free 0x200000500000 3145728 00:08:17.834 free 0x2000004fff40 64 00:08:17.834 unregister 0x200000400000 4194304 PASSED 00:08:17.834 free 0x200000a00000 4194304 00:08:17.834 unregister 0x200000800000 6291456 PASSED 00:08:17.834 malloc 8388608 00:08:17.834 register 0x200000400000 10485760 00:08:17.834 buf 0x200000600000 len 8388608 PASSED 00:08:17.834 free 0x200000600000 8388608 00:08:17.834 unregister 0x200000400000 10485760 PASSED 00:08:17.834 passed 00:08:17.834 00:08:17.834 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.834 suites 1 1 n/a 0 0 00:08:17.834 tests 1 1 1 0 0 00:08:17.834 asserts 15 15 15 0 n/a 00:08:17.834 00:08:17.834 Elapsed time = 0.010 seconds 00:08:17.834 00:08:17.834 real 0m0.069s 00:08:17.834 user 0m0.027s 00:08:17.834 sys 0m0.042s 00:08:17.834 17:24:09 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.834 17:24:09 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:17.834 ************************************ 00:08:17.834 END TEST env_mem_callbacks 00:08:17.834 ************************************ 00:08:17.834 00:08:17.834 real 0m7.441s 00:08:17.834 user 0m0.954s 00:08:17.834 sys 0m1.045s 00:08:17.834 17:24:09 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.834 17:24:09 env -- common/autotest_common.sh@10 -- # set +x 00:08:17.834 ************************************ 00:08:17.834 END TEST env 00:08:17.834 ************************************ 00:08:17.834 17:24:09 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:17.834 17:24:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:17.834 17:24:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.834 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:08:17.834 ************************************ 00:08:17.834 START TEST rpc 00:08:17.834 ************************************ 00:08:17.834 17:24:09 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:17.834 * Looking for test storage... 
00:08:18.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:18.096 17:24:09 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:18.096 17:24:09 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:18.096 17:24:09 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:18.096 17:24:09 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:18.096 17:24:09 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.096 17:24:09 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.096 17:24:09 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.096 17:24:09 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.096 17:24:09 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.096 17:24:09 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.096 17:24:09 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.096 17:24:09 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.096 17:24:09 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.096 17:24:09 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.096 17:24:09 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.096 17:24:09 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:18.096 17:24:09 rpc -- scripts/common.sh@345 -- # : 1 00:08:18.096 17:24:09 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.096 17:24:09 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:18.096 17:24:09 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:18.096 17:24:09 rpc -- scripts/common.sh@353 -- # local d=1 00:08:18.096 17:24:09 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.096 17:24:09 rpc -- scripts/common.sh@355 -- # echo 1 00:08:18.096 17:24:09 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.096 17:24:09 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:18.096 17:24:09 rpc -- scripts/common.sh@353 -- # local d=2 00:08:18.096 17:24:09 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.096 17:24:09 rpc -- scripts/common.sh@355 -- # echo 2 00:08:18.096 17:24:09 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.096 17:24:09 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.096 17:24:09 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.096 17:24:09 rpc -- scripts/common.sh@368 -- # return 0 00:08:18.097 17:24:09 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.097 17:24:09 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:18.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.097 --rc genhtml_branch_coverage=1 00:08:18.097 --rc genhtml_function_coverage=1 00:08:18.097 --rc genhtml_legend=1 00:08:18.097 --rc geninfo_all_blocks=1 00:08:18.097 --rc geninfo_unexecuted_blocks=1 00:08:18.097 00:08:18.097 ' 00:08:18.097 17:24:09 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:18.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.097 --rc genhtml_branch_coverage=1 00:08:18.097 --rc genhtml_function_coverage=1 00:08:18.097 --rc genhtml_legend=1 00:08:18.097 --rc geninfo_all_blocks=1 00:08:18.097 --rc geninfo_unexecuted_blocks=1 00:08:18.097 00:08:18.097 ' 00:08:18.097 17:24:09 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:18.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.097 --rc genhtml_branch_coverage=1 00:08:18.097 --rc genhtml_function_coverage=1 
00:08:18.097 --rc genhtml_legend=1 00:08:18.097 --rc geninfo_all_blocks=1 00:08:18.097 --rc geninfo_unexecuted_blocks=1 00:08:18.097 00:08:18.097 ' 00:08:18.097 17:24:09 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:18.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.097 --rc genhtml_branch_coverage=1 00:08:18.097 --rc genhtml_function_coverage=1 00:08:18.097 --rc genhtml_legend=1 00:08:18.097 --rc geninfo_all_blocks=1 00:08:18.097 --rc geninfo_unexecuted_blocks=1 00:08:18.097 00:08:18.097 ' 00:08:18.097 17:24:09 rpc -- rpc/rpc.sh@65 -- # spdk_pid=128171 00:08:18.097 17:24:09 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:18.097 17:24:09 rpc -- rpc/rpc.sh@67 -- # waitforlisten 128171 00:08:18.097 17:24:09 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:08:18.097 17:24:09 rpc -- common/autotest_common.sh@831 -- # '[' -z 128171 ']' 00:08:18.097 17:24:09 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.097 17:24:09 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:18.097 17:24:09 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.097 17:24:09 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:18.097 17:24:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.097 [2024-10-08 17:24:10.008307] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:08:18.097 [2024-10-08 17:24:10.008385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128171 ] 00:08:18.359 [2024-10-08 17:24:10.090576] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.359 [2024-10-08 17:24:10.187512] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:18.359 [2024-10-08 17:24:10.187575] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 128171' to capture a snapshot of events at runtime. 00:08:18.359 [2024-10-08 17:24:10.187584] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.359 [2024-10-08 17:24:10.187592] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.359 [2024-10-08 17:24:10.187599] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid128171 for offline analysis/debug. 
00:08:18.359 [2024-10-08 17:24:10.188418] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.933 17:24:10 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.933 17:24:10 rpc -- common/autotest_common.sh@864 -- # return 0 00:08:18.933 17:24:10 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:18.933 17:24:10 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:18.933 17:24:10 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:18.933 17:24:10 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:18.933 17:24:10 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:18.933 17:24:10 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.933 17:24:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.933 ************************************ 00:08:18.933 START TEST rpc_integrity 00:08:18.933 ************************************ 00:08:18.933 17:24:10 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:08:18.933 17:24:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:18.933 17:24:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.933 17:24:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.933 17:24:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.933 17:24:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:18.933 17:24:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:18.933 17:24:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:18.933 17:24:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:18.933 17:24:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.933 17:24:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.933 17:24:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.933 17:24:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:18.933 17:24:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:18.933 17:24:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.933 17:24:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.933 17:24:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.933 17:24:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:18.933 { 00:08:18.933 "name": "Malloc0", 00:08:18.933 "aliases": [ 00:08:18.933 "7c524c74-1605-412a-a6f1-c67547d213b5" 00:08:18.933 ], 00:08:18.933 "product_name": "Malloc disk", 00:08:18.933 "block_size": 512, 00:08:18.933 "num_blocks": 16384, 00:08:18.933 "uuid": "7c524c74-1605-412a-a6f1-c67547d213b5", 00:08:18.933 "assigned_rate_limits": { 00:08:18.933 "rw_ios_per_sec": 0, 00:08:18.933 "rw_mbytes_per_sec": 0, 00:08:18.933 "r_mbytes_per_sec": 0, 00:08:18.933 "w_mbytes_per_sec": 0 00:08:18.933 }, 
00:08:18.933 "claimed": false, 00:08:18.933 "zoned": false, 00:08:18.933 "supported_io_types": { 00:08:18.933 "read": true, 00:08:18.933 "write": true, 00:08:18.933 "unmap": true, 00:08:18.933 "flush": true, 00:08:18.933 "reset": true, 00:08:18.933 "nvme_admin": false, 00:08:18.933 "nvme_io": false, 00:08:18.933 "nvme_io_md": false, 00:08:18.933 "write_zeroes": true, 00:08:18.933 "zcopy": true, 00:08:18.933 "get_zone_info": false, 00:08:18.933 "zone_management": false, 00:08:18.933 "zone_append": false, 00:08:18.933 "compare": false, 00:08:18.933 "compare_and_write": false, 00:08:18.933 "abort": true, 00:08:18.933 "seek_hole": false, 00:08:18.933 "seek_data": false, 00:08:18.933 "copy": true, 00:08:18.933 "nvme_iov_md": false 00:08:18.933 }, 00:08:18.933 "memory_domains": [ 00:08:18.933 { 00:08:18.933 "dma_device_id": "system", 00:08:18.933 "dma_device_type": 1 00:08:18.933 }, 00:08:18.933 { 00:08:18.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.933 "dma_device_type": 2 00:08:18.933 } 00:08:18.933 ], 00:08:18.933 "driver_specific": {} 00:08:18.933 } 00:08:18.933 ]' 00:08:18.933 17:24:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:19.195 17:24:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:19.195 17:24:10 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:19.195 17:24:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.195 17:24:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.195 [2024-10-08 17:24:10.977356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:19.195 [2024-10-08 17:24:10.977403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.195 [2024-10-08 17:24:10.977418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf24c40 00:08:19.195 [2024-10-08 17:24:10.977426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.195 [2024-10-08 17:24:10.978988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.195 [2024-10-08 17:24:10.979023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:19.195 Passthru0 00:08:19.195 17:24:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.195 17:24:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:19.195 17:24:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.195 17:24:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.195 17:24:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.195 17:24:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:19.195 { 00:08:19.195 "name": "Malloc0", 00:08:19.195 "aliases": [ 00:08:19.195 "7c524c74-1605-412a-a6f1-c67547d213b5" 00:08:19.195 ], 00:08:19.195 "product_name": "Malloc disk", 00:08:19.195 "block_size": 512, 00:08:19.195 "num_blocks": 16384, 00:08:19.195 "uuid": "7c524c74-1605-412a-a6f1-c67547d213b5", 00:08:19.195 "assigned_rate_limits": { 00:08:19.195 "rw_ios_per_sec": 0, 00:08:19.195 "rw_mbytes_per_sec": 0, 00:08:19.195 "r_mbytes_per_sec": 0, 00:08:19.195 "w_mbytes_per_sec": 0 00:08:19.195 }, 00:08:19.195 "claimed": true, 00:08:19.195 "claim_type": "exclusive_write", 00:08:19.195 "zoned": false, 00:08:19.195 "supported_io_types": { 00:08:19.195 "read": true, 00:08:19.195 "write": true, 00:08:19.195 "unmap": true, 00:08:19.195 "flush": 
true, 00:08:19.195 "reset": true, 00:08:19.195 "nvme_admin": false, 00:08:19.195 "nvme_io": false, 00:08:19.195 "nvme_io_md": false, 00:08:19.195 "write_zeroes": true, 00:08:19.195 "zcopy": true, 00:08:19.195 "get_zone_info": false, 00:08:19.195 "zone_management": false, 00:08:19.195 "zone_append": false, 00:08:19.195 "compare": false, 00:08:19.195 "compare_and_write": false, 00:08:19.195 "abort": true, 00:08:19.195 "seek_hole": false, 00:08:19.195 "seek_data": false, 00:08:19.195 "copy": true, 00:08:19.195 "nvme_iov_md": false 00:08:19.195 }, 00:08:19.195 "memory_domains": [ 00:08:19.195 { 00:08:19.195 "dma_device_id": "system", 00:08:19.195 "dma_device_type": 1 00:08:19.195 }, 00:08:19.195 { 00:08:19.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.195 "dma_device_type": 2 00:08:19.196 } 00:08:19.196 ], 00:08:19.196 "driver_specific": {} 00:08:19.196 }, 00:08:19.196 { 00:08:19.196 "name": "Passthru0", 00:08:19.196 "aliases": [ 00:08:19.196 "c4b01f6f-f047-5b10-be31-3235e3649a27" 00:08:19.196 ], 00:08:19.196 "product_name": "passthru", 00:08:19.196 "block_size": 512, 00:08:19.196 "num_blocks": 16384, 00:08:19.196 "uuid": "c4b01f6f-f047-5b10-be31-3235e3649a27", 00:08:19.196 "assigned_rate_limits": { 00:08:19.196 "rw_ios_per_sec": 0, 00:08:19.196 "rw_mbytes_per_sec": 0, 00:08:19.196 "r_mbytes_per_sec": 0, 00:08:19.196 "w_mbytes_per_sec": 0 00:08:19.196 }, 00:08:19.196 "claimed": false, 00:08:19.196 "zoned": false, 00:08:19.196 "supported_io_types": { 00:08:19.196 "read": true, 00:08:19.196 "write": true, 00:08:19.196 "unmap": true, 00:08:19.196 "flush": true, 00:08:19.196 "reset": true, 00:08:19.196 "nvme_admin": false, 00:08:19.196 "nvme_io": false, 00:08:19.196 "nvme_io_md": false, 00:08:19.196 "write_zeroes": true, 00:08:19.196 "zcopy": true, 00:08:19.196 "get_zone_info": false, 00:08:19.196 "zone_management": false, 00:08:19.196 "zone_append": false, 00:08:19.196 "compare": false, 00:08:19.196 "compare_and_write": false, 00:08:19.196 "abort": true, 00:08:19.196 "seek_hole": false, 00:08:19.196 "seek_data": false, 00:08:19.196 "copy": true, 00:08:19.196 "nvme_iov_md": false 00:08:19.196 }, 00:08:19.196 "memory_domains": [ 00:08:19.196 { 00:08:19.196 "dma_device_id": "system", 00:08:19.196 "dma_device_type": 1 00:08:19.196 }, 00:08:19.196 { 00:08:19.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.196 "dma_device_type": 2 00:08:19.196 } 00:08:19.196 ], 00:08:19.196 "driver_specific": { 00:08:19.196 "passthru": { 00:08:19.196 "name": "Passthru0", 00:08:19.196 "base_bdev_name": "Malloc0" 00:08:19.196 } 00:08:19.196 } 00:08:19.196 } 00:08:19.196 ]' 00:08:19.196 17:24:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:19.196 17:24:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:19.196 17:24:11 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:19.196 17:24:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.196 17:24:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.196 17:24:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.196 17:24:11 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:19.196 17:24:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.196 17:24:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.196 17:24:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.196 17:24:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:08:19.196 17:24:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.196 17:24:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.196 17:24:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.196 17:24:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:19.196 17:24:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:19.196 17:24:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:19.196 00:08:19.196 real 0m0.309s 00:08:19.196 user 0m0.186s 00:08:19.196 sys 0m0.051s 00:08:19.196 17:24:11 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.196 17:24:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.196 ************************************ 00:08:19.196 END TEST rpc_integrity 00:08:19.196 ************************************ 00:08:19.196 17:24:11 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:19.196 17:24:11 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:19.196 17:24:11 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.196 17:24:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.458 ************************************ 00:08:19.458 START TEST rpc_plugins 00:08:19.458 ************************************ 00:08:19.458 17:24:11 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:08:19.458 17:24:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:19.458 17:24:11 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.458 17:24:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:19.458 17:24:11 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.458 17:24:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:19.458 17:24:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:19.458 17:24:11 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.458 17:24:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:19.458 17:24:11 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.458 17:24:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:19.458 { 00:08:19.458 "name": "Malloc1", 00:08:19.458 "aliases": [ 00:08:19.458 "69cf5932-3827-47c2-b5eb-9220c7da1ef6" 00:08:19.458 ], 00:08:19.458 "product_name": "Malloc disk", 00:08:19.458 "block_size": 4096, 00:08:19.458 "num_blocks": 256, 00:08:19.458 "uuid": "69cf5932-3827-47c2-b5eb-9220c7da1ef6", 00:08:19.458 "assigned_rate_limits": { 00:08:19.458 "rw_ios_per_sec": 0, 00:08:19.458 "rw_mbytes_per_sec": 0, 00:08:19.458 "r_mbytes_per_sec": 0, 00:08:19.458 "w_mbytes_per_sec": 0 00:08:19.458 }, 00:08:19.458 "claimed": false, 00:08:19.458 "zoned": false, 00:08:19.458 "supported_io_types": { 00:08:19.458 "read": true, 00:08:19.458 "write": true, 00:08:19.458 "unmap": true, 00:08:19.458 "flush": true, 00:08:19.458 "reset": true, 00:08:19.458 "nvme_admin": false, 00:08:19.458 "nvme_io": false, 00:08:19.458 "nvme_io_md": false, 00:08:19.458 "write_zeroes": true, 00:08:19.458 "zcopy": true, 00:08:19.458 "get_zone_info": false, 00:08:19.458 "zone_management": false, 00:08:19.458 "zone_append": false, 00:08:19.458 "compare": false, 00:08:19.458 "compare_and_write": false, 00:08:19.458 "abort": true, 00:08:19.458 "seek_hole": false, 00:08:19.458 "seek_data": false, 00:08:19.458 "copy": true, 00:08:19.458 "nvme_iov_md": false 
00:08:19.458 }, 00:08:19.458 "memory_domains": [ 00:08:19.458 { 00:08:19.458 "dma_device_id": "system", 00:08:19.458 "dma_device_type": 1 00:08:19.458 }, 00:08:19.458 { 00:08:19.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.458 "dma_device_type": 2 00:08:19.458 } 00:08:19.458 ], 00:08:19.458 "driver_specific": {} 00:08:19.458 } 00:08:19.458 ]' 00:08:19.458 17:24:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:19.458 17:24:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:19.458 17:24:11 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:19.458 17:24:11 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.458 17:24:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:19.458 17:24:11 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.458 17:24:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:19.458 17:24:11 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.458 17:24:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:19.458 17:24:11 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.458 17:24:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:19.458 17:24:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:19.458 17:24:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:19.458 00:08:19.458 real 0m0.158s 00:08:19.458 user 0m0.093s 00:08:19.458 sys 0m0.026s 00:08:19.458 17:24:11 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.458 17:24:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:19.458 ************************************ 00:08:19.458 END TEST rpc_plugins 00:08:19.458 ************************************ 00:08:19.458 17:24:11 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:19.458 17:24:11 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:19.458 17:24:11 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.458 17:24:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.719 ************************************ 00:08:19.719 START TEST rpc_trace_cmd_test 00:08:19.719 ************************************ 00:08:19.720 17:24:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:08:19.720 17:24:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:19.720 17:24:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:19.720 17:24:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.720 17:24:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.720 17:24:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.720 17:24:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:19.720 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid128171", 00:08:19.720 "tpoint_group_mask": "0x8", 00:08:19.720 "iscsi_conn": { 00:08:19.720 "mask": "0x2", 00:08:19.720 "tpoint_mask": "0x0" 00:08:19.720 }, 00:08:19.720 "scsi": { 00:08:19.720 "mask": "0x4", 00:08:19.720 "tpoint_mask": "0x0" 00:08:19.720 }, 00:08:19.720 "bdev": { 00:08:19.720 "mask": "0x8", 00:08:19.720 "tpoint_mask": "0xffffffffffffffff" 00:08:19.720 }, 00:08:19.720 "nvmf_rdma": { 00:08:19.720 "mask": "0x10", 00:08:19.720 "tpoint_mask": "0x0" 00:08:19.720 }, 00:08:19.720 "nvmf_tcp": { 00:08:19.720 "mask": "0x20", 00:08:19.720 
"tpoint_mask": "0x0" 00:08:19.720 }, 00:08:19.720 "ftl": { 00:08:19.720 "mask": "0x40", 00:08:19.720 "tpoint_mask": "0x0" 00:08:19.720 }, 00:08:19.720 "blobfs": { 00:08:19.720 "mask": "0x80", 00:08:19.720 "tpoint_mask": "0x0" 00:08:19.720 }, 00:08:19.720 "dsa": { 00:08:19.720 "mask": "0x200", 00:08:19.720 "tpoint_mask": "0x0" 00:08:19.720 }, 00:08:19.720 "thread": { 00:08:19.720 "mask": "0x400", 00:08:19.720 "tpoint_mask": "0x0" 00:08:19.720 }, 00:08:19.720 "nvme_pcie": { 00:08:19.720 "mask": "0x800", 00:08:19.720 "tpoint_mask": "0x0" 00:08:19.720 }, 00:08:19.720 "iaa": { 00:08:19.720 "mask": "0x1000", 00:08:19.720 "tpoint_mask": "0x0" 00:08:19.720 }, 00:08:19.720 "nvme_tcp": { 00:08:19.720 "mask": "0x2000", 00:08:19.720 "tpoint_mask": "0x0" 00:08:19.720 }, 00:08:19.720 "bdev_nvme": { 00:08:19.720 "mask": "0x4000", 00:08:19.720 "tpoint_mask": "0x0" 00:08:19.720 }, 00:08:19.720 "sock": { 00:08:19.720 "mask": "0x8000", 00:08:19.720 "tpoint_mask": "0x0" 00:08:19.720 }, 00:08:19.720 "blob": { 00:08:19.720 "mask": "0x10000", 00:08:19.720 "tpoint_mask": "0x0" 00:08:19.720 }, 00:08:19.720 "bdev_raid": { 00:08:19.720 "mask": "0x20000", 00:08:19.720 "tpoint_mask": "0x0" 00:08:19.720 }, 00:08:19.720 "scheduler": { 00:08:19.720 "mask": "0x40000", 00:08:19.720 "tpoint_mask": "0x0" 00:08:19.720 } 00:08:19.720 }' 00:08:19.720 17:24:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:19.720 17:24:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:19.720 17:24:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:19.720 17:24:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:19.720 17:24:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:19.720 17:24:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:19.720 17:24:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:19.720 17:24:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:19.720 17:24:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:19.982 17:24:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:19.982 00:08:19.982 real 0m0.257s 00:08:19.982 user 0m0.211s 00:08:19.982 sys 0m0.033s 00:08:19.982 17:24:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.982 17:24:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.982 ************************************ 00:08:19.982 END TEST rpc_trace_cmd_test 00:08:19.982 ************************************ 00:08:19.982 17:24:11 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:19.982 17:24:11 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:19.982 17:24:11 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:19.982 17:24:11 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:19.982 17:24:11 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.982 17:24:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.982 ************************************ 00:08:19.982 START TEST rpc_daemon_integrity 00:08:19.982 ************************************ 00:08:19.982 17:24:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:08:19.982 17:24:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:19.982 17:24:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.982 17:24:11 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.982 17:24:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.982 17:24:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:19.982 17:24:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:19.982 17:24:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:19.982 17:24:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:19.982 17:24:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.982 17:24:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.982 17:24:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.982 17:24:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:19.982 17:24:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:19.982 17:24:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.982 17:24:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.982 17:24:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.982 17:24:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:19.982 { 00:08:19.982 "name": "Malloc2", 00:08:19.982 "aliases": [ 00:08:19.982 "1462c6d5-3288-45fe-9c1f-6d9345c48dac" 00:08:19.982 ], 00:08:19.982 "product_name": "Malloc disk", 00:08:19.982 "block_size": 512, 00:08:19.982 "num_blocks": 16384, 00:08:19.982 "uuid": "1462c6d5-3288-45fe-9c1f-6d9345c48dac", 00:08:19.982 "assigned_rate_limits": { 00:08:19.982 "rw_ios_per_sec": 0, 00:08:19.982 "rw_mbytes_per_sec": 0, 00:08:19.982 "r_mbytes_per_sec": 0, 00:08:19.982 "w_mbytes_per_sec": 0 00:08:19.982 }, 00:08:19.982 "claimed": false, 00:08:19.982 "zoned": false, 00:08:19.982 "supported_io_types": { 00:08:19.982 "read": true, 00:08:19.982 "write": true, 00:08:19.982 "unmap": true, 00:08:19.982 "flush": true, 00:08:19.982 "reset": true, 00:08:19.982 "nvme_admin": false, 00:08:19.982 "nvme_io": false, 00:08:19.982 "nvme_io_md": false, 00:08:19.982 "write_zeroes": true, 00:08:19.982 "zcopy": true, 00:08:19.982 "get_zone_info": false, 00:08:19.982 "zone_management": false, 00:08:19.982 "zone_append": false, 00:08:19.982 "compare": false, 00:08:19.982 "compare_and_write": false, 00:08:19.982 "abort": true, 00:08:19.982 "seek_hole": false, 00:08:19.982 "seek_data": false, 00:08:19.982 "copy": true, 00:08:19.982 "nvme_iov_md": false 00:08:19.982 }, 00:08:19.982 "memory_domains": [ 00:08:19.982 { 00:08:19.982 "dma_device_id": "system", 00:08:19.982 "dma_device_type": 1 00:08:19.982 }, 00:08:19.982 { 00:08:19.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.982 "dma_device_type": 2 00:08:19.982 } 00:08:19.982 ], 00:08:19.982 "driver_specific": {} 00:08:19.982 } 00:08:19.982 ]' 00:08:19.982 17:24:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:19.982 17:24:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:19.982 17:24:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:19.982 17:24:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.982 17:24:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.982 [2024-10-08 17:24:11.943965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:19.982 
[2024-10-08 17:24:11.944015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.983 [2024-10-08 17:24:11.944034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf24ec0 00:08:19.983 [2024-10-08 17:24:11.944042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.983 [2024-10-08 17:24:11.945492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.983 [2024-10-08 17:24:11.945536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:19.983 Passthru0 00:08:19.983 17:24:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.983 17:24:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:19.983 17:24:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.983 17:24:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.244 17:24:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.244 17:24:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:20.244 { 00:08:20.244 "name": "Malloc2", 00:08:20.244 "aliases": [ 00:08:20.244 "1462c6d5-3288-45fe-9c1f-6d9345c48dac" 00:08:20.244 ], 00:08:20.244 "product_name": "Malloc disk", 00:08:20.244 "block_size": 512, 00:08:20.244 "num_blocks": 16384, 00:08:20.244 "uuid": "1462c6d5-3288-45fe-9c1f-6d9345c48dac", 00:08:20.244 "assigned_rate_limits": { 00:08:20.244 "rw_ios_per_sec": 0, 00:08:20.244 "rw_mbytes_per_sec": 0, 00:08:20.244 "r_mbytes_per_sec": 0, 00:08:20.244 "w_mbytes_per_sec": 0 00:08:20.244 }, 00:08:20.244 "claimed": true, 00:08:20.244 "claim_type": "exclusive_write", 00:08:20.244 "zoned": false, 00:08:20.244 "supported_io_types": { 00:08:20.244 "read": true, 00:08:20.244 "write": true, 00:08:20.244 "unmap": true, 00:08:20.244 "flush": true, 00:08:20.244 "reset": true, 00:08:20.244 "nvme_admin": false, 00:08:20.244 "nvme_io": false, 00:08:20.244 "nvme_io_md": false, 00:08:20.244 "write_zeroes": true, 00:08:20.244 "zcopy": true, 00:08:20.244 "get_zone_info": false, 00:08:20.244 "zone_management": false, 00:08:20.244 "zone_append": false, 00:08:20.244 "compare": false, 00:08:20.244 "compare_and_write": false, 00:08:20.244 "abort": true, 00:08:20.244 "seek_hole": false, 00:08:20.244 "seek_data": false, 00:08:20.244 "copy": true, 00:08:20.244 "nvme_iov_md": false 00:08:20.244 }, 00:08:20.244 "memory_domains": [ 00:08:20.244 { 00:08:20.244 "dma_device_id": "system", 00:08:20.244 "dma_device_type": 1 00:08:20.244 }, 00:08:20.244 { 00:08:20.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.244 "dma_device_type": 2 00:08:20.244 } 00:08:20.244 ], 00:08:20.244 "driver_specific": {} 00:08:20.244 }, 00:08:20.244 { 00:08:20.244 "name": "Passthru0", 00:08:20.244 "aliases": [ 00:08:20.244 "4418efd4-234e-505e-b72a-db3755f6864c" 00:08:20.244 ], 00:08:20.244 "product_name": "passthru", 00:08:20.244 "block_size": 512, 00:08:20.244 "num_blocks": 16384, 00:08:20.244 "uuid": "4418efd4-234e-505e-b72a-db3755f6864c", 00:08:20.244 "assigned_rate_limits": { 00:08:20.244 "rw_ios_per_sec": 0, 00:08:20.244 "rw_mbytes_per_sec": 0, 00:08:20.244 "r_mbytes_per_sec": 0, 00:08:20.244 "w_mbytes_per_sec": 0 00:08:20.244 }, 00:08:20.244 "claimed": false, 00:08:20.244 "zoned": false, 00:08:20.244 "supported_io_types": { 00:08:20.244 "read": true, 00:08:20.244 "write": true, 00:08:20.244 "unmap": true, 00:08:20.244 "flush": true, 00:08:20.244 "reset": true, 
00:08:20.244 "nvme_admin": false, 00:08:20.244 "nvme_io": false, 00:08:20.244 "nvme_io_md": false, 00:08:20.244 "write_zeroes": true, 00:08:20.244 "zcopy": true, 00:08:20.244 "get_zone_info": false, 00:08:20.244 "zone_management": false, 00:08:20.244 "zone_append": false, 00:08:20.245 "compare": false, 00:08:20.245 "compare_and_write": false, 00:08:20.245 "abort": true, 00:08:20.245 "seek_hole": false, 00:08:20.245 "seek_data": false, 00:08:20.245 "copy": true, 00:08:20.245 "nvme_iov_md": false 00:08:20.245 }, 00:08:20.245 "memory_domains": [ 00:08:20.245 { 00:08:20.245 "dma_device_id": "system", 00:08:20.245 "dma_device_type": 1 00:08:20.245 }, 00:08:20.245 { 00:08:20.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.245 "dma_device_type": 2 00:08:20.245 } 00:08:20.245 ], 00:08:20.245 "driver_specific": { 00:08:20.245 "passthru": { 00:08:20.245 "name": "Passthru0", 00:08:20.245 "base_bdev_name": "Malloc2" 00:08:20.245 } 00:08:20.245 } 00:08:20.245 } 00:08:20.245 ]' 00:08:20.245 17:24:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:20.245 17:24:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:20.245 17:24:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:20.245 17:24:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.245 17:24:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.245 17:24:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.245 17:24:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:20.245 17:24:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.245 17:24:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.245 17:24:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.245 17:24:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:20.245 17:24:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.245 17:24:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.245 17:24:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.245 17:24:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:20.245 17:24:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:20.245 17:24:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:20.245 00:08:20.245 real 0m0.297s 00:08:20.245 user 0m0.180s 00:08:20.245 sys 0m0.049s 00:08:20.245 17:24:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.245 17:24:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.245 ************************************ 00:08:20.245 END TEST rpc_daemon_integrity 00:08:20.245 ************************************ 00:08:20.245 17:24:12 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:20.245 17:24:12 rpc -- rpc/rpc.sh@84 -- # killprocess 128171 00:08:20.245 17:24:12 rpc -- common/autotest_common.sh@950 -- # '[' -z 128171 ']' 00:08:20.245 17:24:12 rpc -- common/autotest_common.sh@954 -- # kill -0 128171 00:08:20.245 17:24:12 rpc -- common/autotest_common.sh@955 -- # uname 00:08:20.245 17:24:12 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:20.245 17:24:12 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 128171 
00:08:20.245 17:24:12 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:20.245 17:24:12 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:20.245 17:24:12 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 128171' 00:08:20.245 killing process with pid 128171 00:08:20.245 17:24:12 rpc -- common/autotest_common.sh@969 -- # kill 128171 00:08:20.245 17:24:12 rpc -- common/autotest_common.sh@974 -- # wait 128171 00:08:20.507 00:08:20.507 real 0m2.739s 00:08:20.507 user 0m3.406s 00:08:20.507 sys 0m0.898s 00:08:20.507 17:24:12 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.507 17:24:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.507 ************************************ 00:08:20.507 END TEST rpc 00:08:20.507 ************************************ 00:08:20.769 17:24:12 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:20.769 17:24:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.769 17:24:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.769 17:24:12 -- common/autotest_common.sh@10 -- # set +x 00:08:20.769 ************************************ 00:08:20.769 START TEST skip_rpc 00:08:20.769 ************************************ 00:08:20.769 17:24:12 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:20.769 * Looking for test storage... 00:08:20.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:20.770 17:24:12 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:20.770 17:24:12 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:20.770 17:24:12 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:20.770 17:24:12 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.770 17:24:12 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:20.770 17:24:12 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.770 17:24:12 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:20.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.770 --rc genhtml_branch_coverage=1 00:08:20.770 --rc genhtml_function_coverage=1 00:08:20.770 --rc genhtml_legend=1 00:08:20.770 --rc geninfo_all_blocks=1 00:08:20.770 --rc geninfo_unexecuted_blocks=1 00:08:20.770 00:08:20.770 ' 00:08:20.770 17:24:12 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:20.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.770 --rc genhtml_branch_coverage=1 00:08:20.770 --rc genhtml_function_coverage=1 00:08:20.770 --rc genhtml_legend=1 00:08:20.770 --rc geninfo_all_blocks=1 00:08:20.770 --rc geninfo_unexecuted_blocks=1 00:08:20.770 00:08:20.770 ' 00:08:20.770 17:24:12 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:20.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.770 --rc genhtml_branch_coverage=1 00:08:20.770 --rc genhtml_function_coverage=1 00:08:20.770 --rc genhtml_legend=1 00:08:20.770 --rc geninfo_all_blocks=1 00:08:20.770 --rc geninfo_unexecuted_blocks=1 00:08:20.770 00:08:20.770 ' 00:08:20.770 17:24:12 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:20.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.770 --rc genhtml_branch_coverage=1 00:08:20.770 --rc genhtml_function_coverage=1 00:08:20.770 --rc genhtml_legend=1 00:08:20.770 --rc geninfo_all_blocks=1 00:08:20.770 --rc geninfo_unexecuted_blocks=1 00:08:20.770 00:08:20.770 ' 00:08:20.770 17:24:12 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:20.770 17:24:12 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:20.770 17:24:12 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:20.770 17:24:12 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.770 17:24:12 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.770 17:24:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.032 ************************************ 00:08:21.032 START TEST skip_rpc 00:08:21.032 ************************************ 00:08:21.032 17:24:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:08:21.032 
17:24:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=129137 00:08:21.032 17:24:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:21.032 17:24:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:21.032 17:24:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:21.032 [2024-10-08 17:24:12.844085] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:08:21.032 [2024-10-08 17:24:12.844146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129137 ] 00:08:21.032 [2024-10-08 17:24:12.927048] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.032 [2024-10-08 17:24:13.021216] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 129137 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 129137 ']' 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 129137 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 129137 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 129137' 00:08:26.324 killing process with pid 129137 00:08:26.324 17:24:17 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 129137 00:08:26.324 17:24:17 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 129137 00:08:26.324 00:08:26.324 real 0m5.281s 00:08:26.324 user 0m5.019s 00:08:26.324 sys 0m0.298s 00:08:26.324 17:24:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.324 17:24:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.324 ************************************ 00:08:26.324 END TEST skip_rpc 00:08:26.324 ************************************ 00:08:26.324 17:24:18 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:26.324 17:24:18 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:26.324 17:24:18 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.324 17:24:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.324 ************************************ 00:08:26.324 START TEST skip_rpc_with_json 00:08:26.324 ************************************ 00:08:26.324 17:24:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:08:26.324 17:24:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:26.324 17:24:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=130178 00:08:26.324 17:24:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:26.324 17:24:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:26.324 17:24:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 130178 00:08:26.324 17:24:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 130178 ']' 00:08:26.324 17:24:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.324 17:24:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:26.324 17:24:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.324 17:24:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:26.324 17:24:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:26.324 [2024-10-08 17:24:18.196138] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
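The skip_rpc case that just completed reduces to one assertion: with --no-rpc-server the target must come up, but every RPC against it must fail. A minimal sketch of that flow using only commands visible in the trace (pid handling simplified; rpc.py assumed on its default /var/tmp/spdk.sock):

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # target runs without an RPC listener
  spdk_pid=$!
  sleep 5                                         # as in skip_rpc.sh@19
  if ./scripts/rpc.py spdk_get_version; then      # must fail: nothing listens on spdk.sock
      echo 'FAIL: RPC answered despite --no-rpc-server' >&2
  fi
  kill "$spdk_pid"; wait "$spdk_pid"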
00:08:26.324 [2024-10-08 17:24:18.196186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130178 ] 00:08:26.324 [2024-10-08 17:24:18.274709] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.586 [2024-10-08 17:24:18.328841] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.159 17:24:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:27.159 17:24:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:08:27.159 17:24:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:27.159 17:24:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.159 17:24:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:27.159 [2024-10-08 17:24:18.980359] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:27.159 request: 00:08:27.159 { 00:08:27.159 "trtype": "tcp", 00:08:27.159 "method": "nvmf_get_transports", 00:08:27.159 "req_id": 1 00:08:27.159 } 00:08:27.159 Got JSON-RPC error response 00:08:27.159 response: 00:08:27.159 { 00:08:27.159 "code": -19, 00:08:27.159 "message": "No such device" 00:08:27.159 } 00:08:27.159 17:24:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:27.159 17:24:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:27.159 17:24:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.159 17:24:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:27.159 [2024-10-08 17:24:18.992456] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.159 17:24:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.159 17:24:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:27.159 17:24:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.159 17:24:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:27.421 17:24:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.421 17:24:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:27.421 { 00:08:27.421 "subsystems": [ 00:08:27.421 { 00:08:27.421 "subsystem": "fsdev", 00:08:27.421 "config": [ 00:08:27.421 { 00:08:27.421 "method": "fsdev_set_opts", 00:08:27.421 "params": { 00:08:27.421 "fsdev_io_pool_size": 65535, 00:08:27.421 "fsdev_io_cache_size": 256 00:08:27.421 } 00:08:27.421 } 00:08:27.421 ] 00:08:27.421 }, 00:08:27.421 { 00:08:27.421 "subsystem": "vfio_user_target", 00:08:27.421 "config": null 00:08:27.421 }, 00:08:27.421 { 00:08:27.421 "subsystem": "keyring", 00:08:27.421 "config": [] 00:08:27.421 }, 00:08:27.421 { 00:08:27.421 "subsystem": "iobuf", 00:08:27.421 "config": [ 00:08:27.421 { 00:08:27.421 "method": "iobuf_set_options", 00:08:27.421 "params": { 00:08:27.421 "small_pool_count": 8192, 00:08:27.421 "large_pool_count": 1024, 00:08:27.421 "small_bufsize": 8192, 00:08:27.421 "large_bufsize": 135168 00:08:27.421 } 00:08:27.421 } 00:08:27.421 ] 00:08:27.421 }, 00:08:27.421 { 
00:08:27.421 "subsystem": "sock", 00:08:27.421 "config": [ 00:08:27.421 { 00:08:27.421 "method": "sock_set_default_impl", 00:08:27.421 "params": { 00:08:27.421 "impl_name": "posix" 00:08:27.421 } 00:08:27.421 }, 00:08:27.421 { 00:08:27.421 "method": "sock_impl_set_options", 00:08:27.421 "params": { 00:08:27.421 "impl_name": "ssl", 00:08:27.421 "recv_buf_size": 4096, 00:08:27.421 "send_buf_size": 4096, 00:08:27.421 "enable_recv_pipe": true, 00:08:27.421 "enable_quickack": false, 00:08:27.421 "enable_placement_id": 0, 00:08:27.421 "enable_zerocopy_send_server": true, 00:08:27.421 "enable_zerocopy_send_client": false, 00:08:27.421 "zerocopy_threshold": 0, 00:08:27.421 "tls_version": 0, 00:08:27.421 "enable_ktls": false 00:08:27.421 } 00:08:27.421 }, 00:08:27.421 { 00:08:27.421 "method": "sock_impl_set_options", 00:08:27.421 "params": { 00:08:27.421 "impl_name": "posix", 00:08:27.421 "recv_buf_size": 2097152, 00:08:27.421 "send_buf_size": 2097152, 00:08:27.421 "enable_recv_pipe": true, 00:08:27.421 "enable_quickack": false, 00:08:27.421 "enable_placement_id": 0, 00:08:27.421 "enable_zerocopy_send_server": true, 00:08:27.421 "enable_zerocopy_send_client": false, 00:08:27.421 "zerocopy_threshold": 0, 00:08:27.421 "tls_version": 0, 00:08:27.421 "enable_ktls": false 00:08:27.421 } 00:08:27.421 } 00:08:27.421 ] 00:08:27.421 }, 00:08:27.421 { 00:08:27.421 "subsystem": "vmd", 00:08:27.421 "config": [] 00:08:27.421 }, 00:08:27.421 { 00:08:27.421 "subsystem": "accel", 00:08:27.421 "config": [ 00:08:27.421 { 00:08:27.421 "method": "accel_set_options", 00:08:27.421 "params": { 00:08:27.421 "small_cache_size": 128, 00:08:27.421 "large_cache_size": 16, 00:08:27.421 "task_count": 2048, 00:08:27.421 "sequence_count": 2048, 00:08:27.421 "buf_count": 2048 00:08:27.421 } 00:08:27.421 } 00:08:27.421 ] 00:08:27.421 }, 00:08:27.421 { 00:08:27.421 "subsystem": "bdev", 00:08:27.421 "config": [ 00:08:27.421 { 00:08:27.421 "method": "bdev_set_options", 00:08:27.421 "params": { 00:08:27.421 "bdev_io_pool_size": 65535, 00:08:27.421 "bdev_io_cache_size": 256, 00:08:27.421 "bdev_auto_examine": true, 00:08:27.421 "iobuf_small_cache_size": 128, 00:08:27.421 "iobuf_large_cache_size": 16 00:08:27.421 } 00:08:27.421 }, 00:08:27.421 { 00:08:27.421 "method": "bdev_raid_set_options", 00:08:27.421 "params": { 00:08:27.421 "process_window_size_kb": 1024, 00:08:27.421 "process_max_bandwidth_mb_sec": 0 00:08:27.421 } 00:08:27.421 }, 00:08:27.422 { 00:08:27.422 "method": "bdev_iscsi_set_options", 00:08:27.422 "params": { 00:08:27.422 "timeout_sec": 30 00:08:27.422 } 00:08:27.422 }, 00:08:27.422 { 00:08:27.422 "method": "bdev_nvme_set_options", 00:08:27.422 "params": { 00:08:27.422 "action_on_timeout": "none", 00:08:27.422 "timeout_us": 0, 00:08:27.422 "timeout_admin_us": 0, 00:08:27.422 "keep_alive_timeout_ms": 10000, 00:08:27.422 "arbitration_burst": 0, 00:08:27.422 "low_priority_weight": 0, 00:08:27.422 "medium_priority_weight": 0, 00:08:27.422 "high_priority_weight": 0, 00:08:27.422 "nvme_adminq_poll_period_us": 10000, 00:08:27.422 "nvme_ioq_poll_period_us": 0, 00:08:27.422 "io_queue_requests": 0, 00:08:27.422 "delay_cmd_submit": true, 00:08:27.422 "transport_retry_count": 4, 00:08:27.422 "bdev_retry_count": 3, 00:08:27.422 "transport_ack_timeout": 0, 00:08:27.422 "ctrlr_loss_timeout_sec": 0, 00:08:27.422 "reconnect_delay_sec": 0, 00:08:27.422 "fast_io_fail_timeout_sec": 0, 00:08:27.422 "disable_auto_failback": false, 00:08:27.422 "generate_uuids": false, 00:08:27.422 "transport_tos": 0, 00:08:27.422 "nvme_error_stat": false, 
00:08:27.422 "rdma_srq_size": 0, 00:08:27.422 "io_path_stat": false, 00:08:27.422 "allow_accel_sequence": false, 00:08:27.422 "rdma_max_cq_size": 0, 00:08:27.422 "rdma_cm_event_timeout_ms": 0, 00:08:27.422 "dhchap_digests": [ 00:08:27.422 "sha256", 00:08:27.422 "sha384", 00:08:27.422 "sha512" 00:08:27.422 ], 00:08:27.422 "dhchap_dhgroups": [ 00:08:27.422 "null", 00:08:27.422 "ffdhe2048", 00:08:27.422 "ffdhe3072", 00:08:27.422 "ffdhe4096", 00:08:27.422 "ffdhe6144", 00:08:27.422 "ffdhe8192" 00:08:27.422 ] 00:08:27.422 } 00:08:27.422 }, 00:08:27.422 { 00:08:27.422 "method": "bdev_nvme_set_hotplug", 00:08:27.422 "params": { 00:08:27.422 "period_us": 100000, 00:08:27.422 "enable": false 00:08:27.422 } 00:08:27.422 }, 00:08:27.422 { 00:08:27.422 "method": "bdev_wait_for_examine" 00:08:27.422 } 00:08:27.422 ] 00:08:27.422 }, 00:08:27.422 { 00:08:27.422 "subsystem": "scsi", 00:08:27.422 "config": null 00:08:27.422 }, 00:08:27.422 { 00:08:27.422 "subsystem": "scheduler", 00:08:27.422 "config": [ 00:08:27.422 { 00:08:27.422 "method": "framework_set_scheduler", 00:08:27.422 "params": { 00:08:27.422 "name": "static" 00:08:27.422 } 00:08:27.422 } 00:08:27.422 ] 00:08:27.422 }, 00:08:27.422 { 00:08:27.422 "subsystem": "vhost_scsi", 00:08:27.422 "config": [] 00:08:27.422 }, 00:08:27.422 { 00:08:27.422 "subsystem": "vhost_blk", 00:08:27.422 "config": [] 00:08:27.422 }, 00:08:27.422 { 00:08:27.422 "subsystem": "ublk", 00:08:27.422 "config": [] 00:08:27.422 }, 00:08:27.422 { 00:08:27.422 "subsystem": "nbd", 00:08:27.422 "config": [] 00:08:27.422 }, 00:08:27.422 { 00:08:27.422 "subsystem": "nvmf", 00:08:27.422 "config": [ 00:08:27.422 { 00:08:27.422 "method": "nvmf_set_config", 00:08:27.422 "params": { 00:08:27.422 "discovery_filter": "match_any", 00:08:27.422 "admin_cmd_passthru": { 00:08:27.422 "identify_ctrlr": false 00:08:27.422 }, 00:08:27.422 "dhchap_digests": [ 00:08:27.422 "sha256", 00:08:27.422 "sha384", 00:08:27.422 "sha512" 00:08:27.422 ], 00:08:27.422 "dhchap_dhgroups": [ 00:08:27.422 "null", 00:08:27.422 "ffdhe2048", 00:08:27.422 "ffdhe3072", 00:08:27.422 "ffdhe4096", 00:08:27.422 "ffdhe6144", 00:08:27.422 "ffdhe8192" 00:08:27.422 ] 00:08:27.422 } 00:08:27.422 }, 00:08:27.422 { 00:08:27.422 "method": "nvmf_set_max_subsystems", 00:08:27.422 "params": { 00:08:27.422 "max_subsystems": 1024 00:08:27.422 } 00:08:27.422 }, 00:08:27.422 { 00:08:27.422 "method": "nvmf_set_crdt", 00:08:27.422 "params": { 00:08:27.422 "crdt1": 0, 00:08:27.422 "crdt2": 0, 00:08:27.422 "crdt3": 0 00:08:27.422 } 00:08:27.422 }, 00:08:27.422 { 00:08:27.422 "method": "nvmf_create_transport", 00:08:27.422 "params": { 00:08:27.422 "trtype": "TCP", 00:08:27.422 "max_queue_depth": 128, 00:08:27.422 "max_io_qpairs_per_ctrlr": 127, 00:08:27.422 "in_capsule_data_size": 4096, 00:08:27.422 "max_io_size": 131072, 00:08:27.422 "io_unit_size": 131072, 00:08:27.422 "max_aq_depth": 128, 00:08:27.422 "num_shared_buffers": 511, 00:08:27.422 "buf_cache_size": 4294967295, 00:08:27.422 "dif_insert_or_strip": false, 00:08:27.422 "zcopy": false, 00:08:27.422 "c2h_success": true, 00:08:27.422 "sock_priority": 0, 00:08:27.422 "abort_timeout_sec": 1, 00:08:27.422 "ack_timeout": 0, 00:08:27.422 "data_wr_pool_size": 0 00:08:27.422 } 00:08:27.422 } 00:08:27.422 ] 00:08:27.422 }, 00:08:27.422 { 00:08:27.422 "subsystem": "iscsi", 00:08:27.422 "config": [ 00:08:27.422 { 00:08:27.422 "method": "iscsi_set_options", 00:08:27.422 "params": { 00:08:27.422 "node_base": "iqn.2016-06.io.spdk", 00:08:27.422 "max_sessions": 128, 00:08:27.422 
"max_connections_per_session": 2, 00:08:27.422 "max_queue_depth": 64, 00:08:27.422 "default_time2wait": 2, 00:08:27.422 "default_time2retain": 20, 00:08:27.422 "first_burst_length": 8192, 00:08:27.422 "immediate_data": true, 00:08:27.422 "allow_duplicated_isid": false, 00:08:27.422 "error_recovery_level": 0, 00:08:27.422 "nop_timeout": 60, 00:08:27.422 "nop_in_interval": 30, 00:08:27.422 "disable_chap": false, 00:08:27.422 "require_chap": false, 00:08:27.422 "mutual_chap": false, 00:08:27.422 "chap_group": 0, 00:08:27.422 "max_large_datain_per_connection": 64, 00:08:27.422 "max_r2t_per_connection": 4, 00:08:27.422 "pdu_pool_size": 36864, 00:08:27.422 "immediate_data_pool_size": 16384, 00:08:27.422 "data_out_pool_size": 2048 00:08:27.422 } 00:08:27.422 } 00:08:27.422 ] 00:08:27.422 } 00:08:27.422 ] 00:08:27.422 } 00:08:27.422 17:24:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:27.422 17:24:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 130178 00:08:27.422 17:24:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 130178 ']' 00:08:27.422 17:24:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 130178 00:08:27.422 17:24:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:08:27.422 17:24:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:27.422 17:24:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 130178 00:08:27.422 17:24:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:27.422 17:24:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:27.422 17:24:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 130178' 00:08:27.422 killing process with pid 130178 00:08:27.422 17:24:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 130178 00:08:27.422 17:24:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 130178 00:08:27.683 17:24:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=130516 00:08:27.683 17:24:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:27.683 17:24:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 130516 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 130516 ']' 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 130516 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 130516 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 130516' 00:08:33.020 killing process with pid 130516 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 130516 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 130516 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:33.020 00:08:33.020 real 0m6.573s 00:08:33.020 user 0m6.473s 00:08:33.020 sys 0m0.561s 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:33.020 ************************************ 00:08:33.020 END TEST skip_rpc_with_json 00:08:33.020 ************************************ 00:08:33.020 17:24:24 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:33.020 17:24:24 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:33.020 17:24:24 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.020 17:24:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.020 ************************************ 00:08:33.020 START TEST skip_rpc_with_delay 00:08:33.020 ************************************ 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:33.020 [2024-10-08 17:24:24.844799] app.c: 
840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:08:33.020 [2024-10-08 17:24:24.844878] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:33.020 00:08:33.020 real 0m0.076s 00:08:33.020 user 0m0.055s 00:08:33.020 sys 0m0.020s 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.020 17:24:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:33.020 ************************************ 00:08:33.020 END TEST skip_rpc_with_delay 00:08:33.020 ************************************ 00:08:33.020 17:24:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:33.020 17:24:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:33.020 17:24:24 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:33.020 17:24:24 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:33.020 17:24:24 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.020 17:24:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.020 ************************************ 00:08:33.020 START TEST exit_on_failed_rpc_init 00:08:33.020 ************************************ 00:08:33.020 17:24:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:08:33.020 17:24:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=131577 00:08:33.020 17:24:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 131577 00:08:33.020 17:24:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:33.020 17:24:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 131577 ']' 00:08:33.020 17:24:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.020 17:24:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:33.020 17:24:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.020 17:24:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:33.020 17:24:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:33.020 [2024-10-08 17:24:24.997407] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
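skip_rpc_with_delay, which just ended, asserts that the flag combination itself is rejected: --wait-for-rpc is meaningless when --no-rpc-server disables the RPC listener. Sketched with the binary path from the trace; the NOT wrapper passes only because this launch exits non-zero:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # app.c *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
  echo $?   # non-zero by design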
00:08:33.020 [2024-10-08 17:24:24.997463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131577 ] 00:08:33.282 [2024-10-08 17:24:25.077558] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.282 [2024-10-08 17:24:25.138785] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.854 17:24:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.854 17:24:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:08:33.854 17:24:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:33.854 17:24:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:33.854 17:24:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:08:33.854 17:24:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:33.854 17:24:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:33.854 17:24:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:33.854 17:24:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:33.854 17:24:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:33.854 17:24:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:33.854 17:24:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:33.854 17:24:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:33.854 17:24:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:33.854 17:24:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:34.116 [2024-10-08 17:24:25.870619] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:08:34.116 [2024-10-08 17:24:25.870681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131821 ] 00:08:34.116 [2024-10-08 17:24:25.949280] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.116 [2024-10-08 17:24:26.013618] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.116 [2024-10-08 17:24:26.013679] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
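The rpc.c errors around this point are the expected failure path of exit_on_failed_rpc_init: two targets contending for the same default RPC socket. Condensed, with the core masks from the trace:

  ./build/bin/spdk_tgt -m 0x1 &    # first instance binds /var/tmp/spdk.sock
  # once it is listening, a second instance on the same socket must fail fast:
  ./build/bin/spdk_tgt -m 0x2      # _spdk_rpc_listen: socket in use -> app exits non-zero
  [ $? -ne 0 ] && echo 'exit_on_failed_rpc_init: expected failure observed'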
00:08:34.116 [2024-10-08 17:24:26.013689] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:34.116 [2024-10-08 17:24:26.013696] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.116 17:24:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:08:34.116 17:24:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:34.116 17:24:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:08:34.116 17:24:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:08:34.116 17:24:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:08:34.116 17:24:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:34.116 17:24:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:34.116 17:24:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 131577 00:08:34.116 17:24:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 131577 ']' 00:08:34.116 17:24:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 131577 00:08:34.116 17:24:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:08:34.116 17:24:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:34.116 17:24:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 131577 00:08:34.377 17:24:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:34.377 17:24:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:34.377 17:24:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 131577' 00:08:34.377 killing process with pid 131577 00:08:34.377 17:24:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 131577 00:08:34.377 17:24:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 131577 00:08:34.377 00:08:34.377 real 0m1.398s 00:08:34.377 user 0m1.671s 00:08:34.377 sys 0m0.396s 00:08:34.377 17:24:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.377 17:24:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:34.377 ************************************ 00:08:34.377 END TEST exit_on_failed_rpc_init 00:08:34.377 ************************************ 00:08:34.638 17:24:26 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:34.638 00:08:34.638 real 0m13.839s 00:08:34.638 user 0m13.443s 00:08:34.639 sys 0m1.588s 00:08:34.639 17:24:26 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.639 17:24:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.639 ************************************ 00:08:34.639 END TEST skip_rpc 00:08:34.639 ************************************ 00:08:34.639 17:24:26 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:34.639 17:24:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.639 17:24:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.639 17:24:26 -- 
common/autotest_common.sh@10 -- # set +x 00:08:34.639 ************************************ 00:08:34.639 START TEST rpc_client 00:08:34.639 ************************************ 00:08:34.639 17:24:26 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:34.639 * Looking for test storage... 00:08:34.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:08:34.639 17:24:26 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:34.639 17:24:26 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:08:34.639 17:24:26 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:34.900 17:24:26 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.900 17:24:26 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:34.900 17:24:26 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.900 17:24:26 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:34.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.900 --rc genhtml_branch_coverage=1 00:08:34.900 --rc genhtml_function_coverage=1 00:08:34.900 --rc genhtml_legend=1 00:08:34.900 --rc geninfo_all_blocks=1 00:08:34.900 --rc geninfo_unexecuted_blocks=1 00:08:34.900 00:08:34.900 ' 00:08:34.900 17:24:26 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:34.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.900 --rc genhtml_branch_coverage=1 00:08:34.900 --rc genhtml_function_coverage=1 00:08:34.900 --rc genhtml_legend=1 00:08:34.900 --rc geninfo_all_blocks=1 00:08:34.900 --rc geninfo_unexecuted_blocks=1 00:08:34.900 00:08:34.900 ' 00:08:34.900 17:24:26 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:34.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.900 --rc genhtml_branch_coverage=1 00:08:34.900 --rc genhtml_function_coverage=1 00:08:34.900 --rc genhtml_legend=1 00:08:34.900 --rc geninfo_all_blocks=1 00:08:34.900 --rc geninfo_unexecuted_blocks=1 00:08:34.900 00:08:34.900 ' 00:08:34.900 17:24:26 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:34.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.900 --rc genhtml_branch_coverage=1 00:08:34.900 --rc genhtml_function_coverage=1 00:08:34.900 --rc genhtml_legend=1 00:08:34.900 --rc geninfo_all_blocks=1 00:08:34.900 --rc geninfo_unexecuted_blocks=1 00:08:34.900 00:08:34.900 ' 00:08:34.900 17:24:26 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:08:34.900 OK 00:08:34.900 17:24:26 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:34.900 00:08:34.900 real 0m0.224s 00:08:34.900 user 0m0.131s 00:08:34.900 sys 0m0.105s 00:08:34.900 17:24:26 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.900 17:24:26 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:34.900 ************************************ 00:08:34.900 END TEST rpc_client 00:08:34.900 ************************************ 00:08:34.900 17:24:26 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
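The cmp_versions trace that opens each suite (above for rpc_client, again below for json_config) is a numeric, component-wise version compare used to pick the lcov option spelling. A condensed, illustrative equivalent — not the verbatim scripts/common.sh source, which also splits on '-' and ':' and validates each field:

  version_lt() {    # returns 0 (true) iff $1 < $2, fields split on '.'
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1    # equal is not less-than
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'pre-2.0 lcov: use --rc lcov_* names'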
00:08:34.900 17:24:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.901 17:24:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.901 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.901 ************************************ 00:08:34.901 START TEST json_config 00:08:34.901 ************************************ 00:08:34.901 17:24:26 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:34.901 17:24:26 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:34.901 17:24:26 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:08:34.901 17:24:26 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:35.164 17:24:26 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:35.164 17:24:26 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.164 17:24:26 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.164 17:24:26 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.164 17:24:26 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.164 17:24:26 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.164 17:24:26 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.164 17:24:26 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.164 17:24:26 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.164 17:24:26 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.164 17:24:26 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.164 17:24:26 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.164 17:24:26 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:35.164 17:24:26 json_config -- scripts/common.sh@345 -- # : 1 00:08:35.164 17:24:26 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.164 17:24:26 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:35.164 17:24:26 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:35.164 17:24:26 json_config -- scripts/common.sh@353 -- # local d=1 00:08:35.164 17:24:26 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.164 17:24:26 json_config -- scripts/common.sh@355 -- # echo 1 00:08:35.164 17:24:26 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.164 17:24:26 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:35.164 17:24:26 json_config -- scripts/common.sh@353 -- # local d=2 00:08:35.164 17:24:26 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.164 17:24:26 json_config -- scripts/common.sh@355 -- # echo 2 00:08:35.164 17:24:26 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.164 17:24:26 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.164 17:24:26 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.164 17:24:26 json_config -- scripts/common.sh@368 -- # return 0 00:08:35.164 17:24:26 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.164 17:24:26 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:35.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.164 --rc genhtml_branch_coverage=1 00:08:35.164 --rc genhtml_function_coverage=1 00:08:35.164 --rc genhtml_legend=1 00:08:35.164 --rc geninfo_all_blocks=1 00:08:35.164 --rc geninfo_unexecuted_blocks=1 00:08:35.164 00:08:35.164 ' 00:08:35.164 17:24:26 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:35.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.164 --rc genhtml_branch_coverage=1 00:08:35.164 --rc genhtml_function_coverage=1 00:08:35.164 --rc genhtml_legend=1 00:08:35.164 --rc geninfo_all_blocks=1 00:08:35.164 --rc geninfo_unexecuted_blocks=1 00:08:35.164 00:08:35.164 ' 00:08:35.164 17:24:26 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:35.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.164 --rc genhtml_branch_coverage=1 00:08:35.164 --rc genhtml_function_coverage=1 00:08:35.164 --rc genhtml_legend=1 00:08:35.164 --rc geninfo_all_blocks=1 00:08:35.164 --rc geninfo_unexecuted_blocks=1 00:08:35.164 00:08:35.164 ' 00:08:35.164 17:24:26 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:35.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.164 --rc genhtml_branch_coverage=1 00:08:35.164 --rc genhtml_function_coverage=1 00:08:35.164 --rc genhtml_legend=1 00:08:35.164 --rc geninfo_all_blocks=1 00:08:35.164 --rc geninfo_unexecuted_blocks=1 00:08:35.164 00:08:35.164 ' 00:08:35.164 17:24:26 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:08:35.164 17:24:26 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:35.164 17:24:26 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:35.164 17:24:26 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.164 17:24:26 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.164 17:24:26 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.164 17:24:26 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.164 17:24:26 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.164 17:24:26 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.164 17:24:26 json_config -- paths/export.sh@5 -- # export PATH 00:08:35.164 17:24:26 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@51 -- # : 0 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:08:35.164 17:24:26 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:35.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:35.164 17:24:26 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:35.164 17:24:26 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:35.164 17:24:26 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:35.164 17:24:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:35.164 17:24:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:35.164 17:24:26 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:35.164 17:24:26 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:08:35.164 17:24:26 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:08:35.164 17:24:26 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:35.164 17:24:26 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:08:35.164 17:24:26 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:35.164 17:24:26 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:08:35.165 17:24:26 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:08:35.165 17:24:26 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:08:35.165 17:24:26 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:08:35.165 17:24:26 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:35.165 17:24:26 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:08:35.165 INFO: JSON configuration test init 00:08:35.165 17:24:26 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:08:35.165 17:24:26 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:08:35.165 17:24:26 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:35.165 17:24:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:35.165 17:24:26 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:08:35.165 17:24:26 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:35.165 17:24:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:35.165 17:24:26 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:08:35.165 17:24:26 json_config -- 
json_config/common.sh@9 -- # local app=target 00:08:35.165 17:24:26 json_config -- json_config/common.sh@10 -- # shift 00:08:35.165 17:24:26 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:35.165 17:24:26 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:35.165 17:24:26 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:35.165 17:24:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:35.165 17:24:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:35.165 17:24:26 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=132054 00:08:35.165 17:24:26 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:35.165 Waiting for target to run... 00:08:35.165 17:24:26 json_config -- json_config/common.sh@25 -- # waitforlisten 132054 /var/tmp/spdk_tgt.sock 00:08:35.165 17:24:26 json_config -- common/autotest_common.sh@831 -- # '[' -z 132054 ']' 00:08:35.165 17:24:26 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:35.165 17:24:26 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:35.165 17:24:26 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:35.165 17:24:26 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:35.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:35.165 17:24:26 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:35.165 17:24:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:35.165 [2024-10-08 17:24:27.032057] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
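json_config starts the target paused on a dedicated socket and only then feeds it a configuration over RPC. Reduced to the two commands the trace shows here and just below:

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # subsystem init stalls until configuration arrives on the socket:
  ./scripts/gen_nvme.sh --json-with-subsystems | \
      ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config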
00:08:35.165 [2024-10-08 17:24:27.032126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132054 ] 00:08:35.427 [2024-10-08 17:24:27.313369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.427 [2024-10-08 17:24:27.355824] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.999 17:24:27 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:35.999 17:24:27 json_config -- common/autotest_common.sh@864 -- # return 0 00:08:35.999 17:24:27 json_config -- json_config/common.sh@26 -- # echo '' 00:08:35.999 00:08:35.999 17:24:27 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:08:35.999 17:24:27 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:08:35.999 17:24:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:35.999 17:24:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:35.999 17:24:27 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:08:35.999 17:24:27 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:08:35.999 17:24:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:35.999 17:24:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:35.999 17:24:27 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:35.999 17:24:27 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:08:35.999 17:24:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:36.571 17:24:28 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:08:36.571 17:24:28 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:08:36.571 17:24:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:36.571 17:24:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:36.571 17:24:28 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:08:36.571 17:24:28 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:36.571 17:24:28 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:08:36.571 17:24:28 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:08:36.571 17:24:28 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:08:36.571 17:24:28 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:36.571 17:24:28 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:36.571 17:24:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:36.832 17:24:28 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:08:36.832 17:24:28 json_config -- json_config/json_config.sh@51 -- # local get_types 00:08:36.832 17:24:28 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:08:36.832 17:24:28 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:08:36.832 17:24:28 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:08:36.832 17:24:28 json_config -- json_config/json_config.sh@54 -- # sort 00:08:36.832 17:24:28 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:08:36.832 17:24:28 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:08:36.832 17:24:28 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:08:36.832 17:24:28 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:08:36.832 17:24:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:36.832 17:24:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:36.832 17:24:28 json_config -- json_config/json_config.sh@62 -- # return 0 00:08:36.832 17:24:28 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:08:36.832 17:24:28 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:08:36.832 17:24:28 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:08:36.832 17:24:28 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:08:36.832 17:24:28 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:08:36.832 17:24:28 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:08:36.832 17:24:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:36.832 17:24:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:36.832 17:24:28 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:08:36.832 17:24:28 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:08:36.832 17:24:28 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:08:36.832 17:24:28 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:36.832 17:24:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:36.832 MallocForNvmf0 00:08:37.093 17:24:28 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:37.093 17:24:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:37.093 MallocForNvmf1 00:08:37.093 17:24:28 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:08:37.093 17:24:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:08:37.354 [2024-10-08 17:24:29.149295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.354 17:24:29 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:37.354 17:24:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:37.354 17:24:29 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:37.354 17:24:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:37.616 17:24:29 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:37.616 17:24:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:37.878 17:24:29 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:37.878 17:24:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:37.878 [2024-10-08 17:24:29.803306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:37.878 17:24:29 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:08:37.878 17:24:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:37.878 17:24:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:37.878 17:24:29 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:08:37.878 17:24:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:37.878 17:24:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:38.139 17:24:29 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:08:38.139 17:24:29 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:38.139 17:24:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:38.139 MallocBdevForConfigChangeCheck 00:08:38.139 17:24:30 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:08:38.139 17:24:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:38.139 17:24:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:38.139 17:24:30 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:08:38.139 17:24:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:38.711 17:24:30 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:08:38.711 INFO: shutting down applications... 
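The trace above assembles the NVMe-oF target state one RPC at a time: two malloc bdevs, a TCP transport, a subsystem, two namespaces, and a listener on 127.0.0.1:4420, followed by a scratch bdev (MallocBdevForConfigChangeCheck) and a save_config snapshot. For reference, the same sequence as a standalone sketch against an already-running spdk_tgt (every command is copied from the trace; the relative rpc.py path assumes you run from an SPDK checkout):

    #!/usr/bin/env bash
    # Rebuild the subsystem configuration exercised by the json_config test.
    set -e
    rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    # Backing bdevs: 8 MiB with 512 B blocks, 4 MiB with 1 KiB blocks.
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1

    # TCP transport, then a subsystem with both namespaces and a listener.
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420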
00:08:38.711 17:24:30 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:08:38.711 17:24:30 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:08:38.711 17:24:30 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:08:38.711 17:24:30 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:38.972 Calling clear_iscsi_subsystem 00:08:38.972 Calling clear_nvmf_subsystem 00:08:38.972 Calling clear_nbd_subsystem 00:08:38.972 Calling clear_ublk_subsystem 00:08:38.972 Calling clear_vhost_blk_subsystem 00:08:38.972 Calling clear_vhost_scsi_subsystem 00:08:38.972 Calling clear_bdev_subsystem 00:08:38.972 17:24:30 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:08:38.972 17:24:30 json_config -- json_config/json_config.sh@350 -- # count=100 00:08:38.972 17:24:30 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:08:38.972 17:24:30 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:38.972 17:24:30 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:38.972 17:24:30 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:08:39.233 17:24:31 json_config -- json_config/json_config.sh@352 -- # break 00:08:39.233 17:24:31 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:08:39.233 17:24:31 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:08:39.233 17:24:31 json_config -- json_config/common.sh@31 -- # local app=target 00:08:39.233 17:24:31 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:39.233 17:24:31 json_config -- json_config/common.sh@35 -- # [[ -n 132054 ]] 00:08:39.233 17:24:31 json_config -- json_config/common.sh@38 -- # kill -SIGINT 132054 00:08:39.233 17:24:31 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:39.233 17:24:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:39.233 17:24:31 json_config -- json_config/common.sh@41 -- # kill -0 132054 00:08:39.233 17:24:31 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:39.807 17:24:31 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:39.807 17:24:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:39.807 17:24:31 json_config -- json_config/common.sh@41 -- # kill -0 132054 00:08:39.807 17:24:31 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:39.807 17:24:31 json_config -- json_config/common.sh@43 -- # break 00:08:39.807 17:24:31 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:39.807 17:24:31 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:39.807 SPDK target shutdown done 00:08:39.807 17:24:31 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:08:39.807 INFO: relaunching applications... 
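json_config_test_shutdown_app, traced just above, stops the target with SIGINT and then polls `kill -0 132054` for up to 30 iterations at 0.5 s apiece before printing "SPDK target shutdown done". The same graceful-stop idiom as a reusable sketch (the function name and messages are illustrative, not the harness's):

    # Ask a process to exit on SIGINT, waiting up to ~15 s for it to go away.
    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid" 2>/dev/null || return 0   # already gone
        for ((i = 0; i < 30; i++)); do
            # kill -0 sends no signal; it only tests that the pid still exists.
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        echo "process $pid ignored SIGINT" >&2
        return 1
    }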
00:08:39.807 17:24:31 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:39.807 17:24:31 json_config -- json_config/common.sh@9 -- # local app=target 00:08:39.807 17:24:31 json_config -- json_config/common.sh@10 -- # shift 00:08:39.807 17:24:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:39.807 17:24:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:39.807 17:24:31 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:39.807 17:24:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:39.807 17:24:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:39.807 17:24:31 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=133189 00:08:39.807 17:24:31 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:39.807 Waiting for target to run... 00:08:39.807 17:24:31 json_config -- json_config/common.sh@25 -- # waitforlisten 133189 /var/tmp/spdk_tgt.sock 00:08:39.807 17:24:31 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:39.807 17:24:31 json_config -- common/autotest_common.sh@831 -- # '[' -z 133189 ']' 00:08:39.807 17:24:31 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:39.807 17:24:31 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.807 17:24:31 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:39.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:39.807 17:24:31 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.807 17:24:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:39.807 [2024-10-08 17:24:31.768384] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:08:39.807 [2024-10-08 17:24:31.768470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133189 ] 00:08:40.069 [2024-10-08 17:24:32.031549] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.330 [2024-10-08 17:24:32.078054] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.592 [2024-10-08 17:24:32.575408] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.853 [2024-10-08 17:24:32.607731] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:40.853 17:24:32 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.853 17:24:32 json_config -- common/autotest_common.sh@864 -- # return 0 00:08:40.853 17:24:32 json_config -- json_config/common.sh@26 -- # echo '' 00:08:40.853 00:08:40.853 17:24:32 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:08:40.853 17:24:32 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:40.853 INFO: Checking if target configuration is the same... 
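The configuration check that follows is a normalize-then-diff: json_diff.sh dumps the live config (tgt_rpc save_config) and the on-disk spdk_tgt_config.json into mktemp files, passes each through config_filter.py -method sort, and runs diff -u; an empty diff exits 0 ("JSON config files are the same"), a non-empty one prints both files and exits 1 ("configuration change detected"). A minimal generic sketch of the same technique, with `jq -S .` standing in for config_filter.py's sort (the jq normalizer is a substitution, and unlike the test's filter it only sorts object keys, not arrays):

    #!/usr/bin/env bash
    # json_same.sh FILE1 FILE2 -- exit 0 iff the two JSON docs match
    # after key-order normalization.
    a=$(mktemp) b=$(mktemp)
    trap 'rm -f "$a" "$b"' EXIT
    jq -S . "$1" > "$a"
    jq -S . "$2" > "$b"
    if diff -u "$a" "$b"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
        exit 1
    fi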
00:08:40.853 17:24:32 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:40.853 17:24:32 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:08:40.853 17:24:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:40.853 + '[' 2 -ne 2 ']' 00:08:40.853 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:40.853 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:08:40.853 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:40.853 +++ basename /dev/fd/62 00:08:40.853 ++ mktemp /tmp/62.XXX 00:08:40.853 + tmp_file_1=/tmp/62.4Mt 00:08:40.853 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:40.853 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:40.853 + tmp_file_2=/tmp/spdk_tgt_config.json.aOa 00:08:40.853 + ret=0 00:08:40.853 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:41.113 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:41.113 + diff -u /tmp/62.4Mt /tmp/spdk_tgt_config.json.aOa 00:08:41.113 + echo 'INFO: JSON config files are the same' 00:08:41.113 INFO: JSON config files are the same 00:08:41.113 + rm /tmp/62.4Mt /tmp/spdk_tgt_config.json.aOa 00:08:41.113 + exit 0 00:08:41.113 17:24:33 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:08:41.113 17:24:33 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:41.113 INFO: changing configuration and checking if this can be detected... 00:08:41.113 17:24:33 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:41.113 17:24:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:41.373 17:24:33 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:41.373 17:24:33 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:08:41.373 17:24:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:41.373 + '[' 2 -ne 2 ']' 00:08:41.373 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:41.374 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:08:41.374 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:41.374 +++ basename /dev/fd/62 00:08:41.374 ++ mktemp /tmp/62.XXX 00:08:41.374 + tmp_file_1=/tmp/62.iHB 00:08:41.374 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:41.374 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:41.374 + tmp_file_2=/tmp/spdk_tgt_config.json.58D 00:08:41.374 + ret=0 00:08:41.374 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:41.634 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:41.634 + diff -u /tmp/62.iHB /tmp/spdk_tgt_config.json.58D 00:08:41.634 + ret=1 00:08:41.634 + echo '=== Start of file: /tmp/62.iHB ===' 00:08:41.634 + cat /tmp/62.iHB 00:08:41.634 + echo '=== End of file: /tmp/62.iHB ===' 00:08:41.634 + echo '' 00:08:41.634 + echo '=== Start of file: /tmp/spdk_tgt_config.json.58D ===' 00:08:41.634 + cat /tmp/spdk_tgt_config.json.58D 00:08:41.634 + echo '=== End of file: /tmp/spdk_tgt_config.json.58D ===' 00:08:41.634 + echo '' 00:08:41.634 + rm /tmp/62.iHB /tmp/spdk_tgt_config.json.58D 00:08:41.634 + exit 1 00:08:41.634 17:24:33 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:08:41.634 INFO: configuration change detected. 00:08:41.634 17:24:33 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:08:41.634 17:24:33 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:08:41.634 17:24:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:41.634 17:24:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:41.634 17:24:33 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:08:41.634 17:24:33 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:08:41.634 17:24:33 json_config -- json_config/json_config.sh@324 -- # [[ -n 133189 ]] 00:08:41.634 17:24:33 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:08:41.634 17:24:33 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:08:41.634 17:24:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:41.634 17:24:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:41.634 17:24:33 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:08:41.634 17:24:33 json_config -- json_config/json_config.sh@200 -- # uname -s 00:08:41.634 17:24:33 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:08:41.634 17:24:33 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:08:41.634 17:24:33 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:08:41.634 17:24:33 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:08:41.634 17:24:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:41.634 17:24:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:41.896 17:24:33 json_config -- json_config/json_config.sh@330 -- # killprocess 133189 00:08:41.896 17:24:33 json_config -- common/autotest_common.sh@950 -- # '[' -z 133189 ']' 00:08:41.896 17:24:33 json_config -- common/autotest_common.sh@954 -- # kill -0 133189 00:08:41.896 17:24:33 json_config -- common/autotest_common.sh@955 -- # uname 00:08:41.896 17:24:33 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.896 17:24:33 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 133189 00:08:41.896 17:24:33 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:41.896 17:24:33 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:41.896 17:24:33 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 133189' 00:08:41.896 killing process with pid 133189 00:08:41.896 17:24:33 json_config -- common/autotest_common.sh@969 -- # kill 133189 00:08:41.896 17:24:33 json_config -- common/autotest_common.sh@974 -- # wait 133189 00:08:42.158 17:24:33 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:42.158 17:24:33 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:08:42.158 17:24:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:42.158 17:24:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:42.158 17:24:34 json_config -- json_config/json_config.sh@335 -- # return 0 00:08:42.158 17:24:34 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:08:42.158 INFO: Success 00:08:42.158 00:08:42.158 real 0m7.274s 00:08:42.158 user 0m8.804s 00:08:42.158 sys 0m1.927s 00:08:42.158 17:24:34 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.158 17:24:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:42.158 ************************************ 00:08:42.158 END TEST json_config 00:08:42.158 ************************************ 00:08:42.158 17:24:34 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:42.158 17:24:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:42.158 17:24:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.158 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:08:42.158 ************************************ 00:08:42.158 START TEST json_config_extra_key 00:08:42.158 ************************************ 00:08:42.158 17:24:34 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:42.421 17:24:34 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:42.421 17:24:34 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:08:42.421 17:24:34 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:42.421 17:24:34 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.421 17:24:34 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:42.421 17:24:34 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.421 17:24:34 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:42.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.421 --rc genhtml_branch_coverage=1 00:08:42.421 --rc genhtml_function_coverage=1 00:08:42.421 --rc genhtml_legend=1 00:08:42.421 --rc geninfo_all_blocks=1 00:08:42.421 --rc geninfo_unexecuted_blocks=1 00:08:42.421 00:08:42.421 ' 00:08:42.421 17:24:34 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:42.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.421 --rc genhtml_branch_coverage=1 00:08:42.421 --rc genhtml_function_coverage=1 00:08:42.421 --rc genhtml_legend=1 00:08:42.421 --rc geninfo_all_blocks=1 00:08:42.421 --rc geninfo_unexecuted_blocks=1 00:08:42.421 00:08:42.421 ' 00:08:42.421 17:24:34 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:42.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.421 --rc genhtml_branch_coverage=1 00:08:42.421 --rc genhtml_function_coverage=1 00:08:42.421 --rc genhtml_legend=1 00:08:42.421 --rc geninfo_all_blocks=1 00:08:42.421 --rc geninfo_unexecuted_blocks=1 00:08:42.421 00:08:42.421 ' 00:08:42.421 17:24:34 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:42.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.421 --rc genhtml_branch_coverage=1 00:08:42.421 --rc genhtml_function_coverage=1 00:08:42.421 --rc genhtml_legend=1 00:08:42.421 --rc geninfo_all_blocks=1 00:08:42.421 --rc geninfo_unexecuted_blocks=1 00:08:42.421 00:08:42.421 ' 00:08:42.421 17:24:34 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.421 17:24:34 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.421 17:24:34 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.421 17:24:34 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.421 17:24:34 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.421 17:24:34 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:42.421 17:24:34 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:42.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:42.421 17:24:34 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:42.421 17:24:34 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:42.421 17:24:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:42.421 17:24:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:42.422 17:24:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:42.422 17:24:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:42.422 17:24:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:42.422 17:24:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:42.422 17:24:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:08:42.422 17:24:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:42.422 17:24:34 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:42.422 17:24:34 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:42.422 INFO: launching applications... 
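One genuine shell error is captured in the trace above: nvmf/common.sh line 33 evaluates `'[' '' -eq 1 ']'` because an empty variable reaches a numeric test, so bash reports "integer expression expected" and the condition simply evaluates false (harmless here, but it prints on every test that sources the file). The usual defensive fix is to give the expansion a numeric default; a sketch (FLAG and enable_feature are placeholders, since the log does not show which variable line 33 tests):

    # Breaks when FLAG is unset or empty:
    #   [ "$FLAG" -eq 1 ] && enable_feature
    # bash: [: : integer expression expected

    # Defaulting the expansion keeps the operand numeric in every case:
    if [ "${FLAG:-0}" -eq 1 ]; then
        enable_feature   # placeholder action
    fi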
00:08:42.422 17:24:34 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:42.422 17:24:34 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:42.422 17:24:34 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:42.422 17:24:34 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:42.422 17:24:34 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:42.422 17:24:34 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:42.422 17:24:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:42.422 17:24:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:42.422 17:24:34 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=133930 00:08:42.422 17:24:34 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:42.422 Waiting for target to run... 00:08:42.422 17:24:34 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 133930 /var/tmp/spdk_tgt.sock 00:08:42.422 17:24:34 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 133930 ']' 00:08:42.422 17:24:34 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:42.422 17:24:34 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:42.422 17:24:34 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:42.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:42.422 17:24:34 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:42.422 17:24:34 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:42.422 17:24:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:42.422 [2024-10-08 17:24:34.373011] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:08:42.422 [2024-10-08 17:24:34.373087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133930 ] 00:08:42.683 [2024-10-08 17:24:34.666624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.944 [2024-10-08 17:24:34.708467] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.205 17:24:35 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:43.205 17:24:35 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:08:43.205 17:24:35 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:43.205 00:08:43.205 17:24:35 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:08:43.205 INFO: shutting down applications... 
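Each launch above follows the same pattern: background spdk_tgt with `-r /var/tmp/spdk_tgt.sock`, record its pid in app_pid, and block in waitforlisten until the RPC socket answers before echoing the empty "ready" line. One way to approximate that readiness probe (the polling loop and the choice of spdk_get_version are assumptions; waitforlisten itself lives in common/autotest_common.sh and is not shown in this log):

    # Poll until an SPDK app answers RPCs on its UNIX socket (max ~10 s).
    wait_for_rpc() {
        local sock=$1 i
        for ((i = 0; i < 100; i++)); do
            # spdk_get_version is cheap and listed in rpc_get_methods.
            if scripts/rpc.py -s "$sock" -t 1 spdk_get_version >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        echo "no RPC listener on $sock" >&2
        return 1
    }
    wait_for_rpc /var/tmp/spdk_tgt.sock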
00:08:43.205 17:24:35 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:43.205 17:24:35 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:43.205 17:24:35 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:43.205 17:24:35 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 133930 ]] 00:08:43.205 17:24:35 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 133930 00:08:43.205 17:24:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:43.205 17:24:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:43.205 17:24:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 133930 00:08:43.205 17:24:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:43.777 17:24:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:43.777 17:24:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:43.777 17:24:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 133930 00:08:43.778 17:24:35 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:43.778 17:24:35 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:43.778 17:24:35 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:43.778 17:24:35 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:43.778 SPDK target shutdown done 00:08:43.778 17:24:35 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:43.778 Success 00:08:43.778 00:08:43.778 real 0m1.565s 00:08:43.778 user 0m1.162s 00:08:43.778 sys 0m0.436s 00:08:43.778 17:24:35 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.778 17:24:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:43.778 ************************************ 00:08:43.778 END TEST json_config_extra_key 00:08:43.778 ************************************ 00:08:43.778 17:24:35 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:43.778 17:24:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:43.778 17:24:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.778 17:24:35 -- common/autotest_common.sh@10 -- # set +x 00:08:43.778 ************************************ 00:08:43.778 START TEST alias_rpc 00:08:43.778 ************************************ 00:08:43.778 17:24:35 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:44.041 * Looking for test storage... 
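The "Looking for test storage" preamble that opens TEST alias_rpc here (its trace resumes just below) re-runs the lcov version gate seen earlier: `lt 1.15 2` splits each version on `.`, `-`, and `:` with `IFS=.-:` and `read -ra`, then walks the components numerically, treating missing fields as 0. The idiom condensed into one function (the wrapper is illustrative; it assumes purely numeric components, which the harness checks separately with `[[ ... =~ ^[0-9]+$ ]]`):

    # version_lt A B -- succeed when version A sorts strictly before B.
    version_lt() {
        local -a v1 v2
        local i n
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            # Absent components compare as 0, so 1.15 equals 1.15.0.
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo 'lcov predates 2.x: pass the old coverage flags'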
00:08:44.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:08:44.041 17:24:35 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:44.041 17:24:35 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:44.041 17:24:35 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:44.041 17:24:35 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.041 17:24:35 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.042 17:24:35 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.042 17:24:35 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:44.042 17:24:35 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.042 17:24:35 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:44.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.042 --rc genhtml_branch_coverage=1 00:08:44.042 --rc genhtml_function_coverage=1 00:08:44.042 --rc genhtml_legend=1 00:08:44.042 --rc geninfo_all_blocks=1 00:08:44.042 --rc geninfo_unexecuted_blocks=1 00:08:44.042 00:08:44.042 ' 00:08:44.042 17:24:35 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:44.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.042 --rc genhtml_branch_coverage=1 00:08:44.042 --rc genhtml_function_coverage=1 00:08:44.042 --rc genhtml_legend=1 00:08:44.042 --rc geninfo_all_blocks=1 00:08:44.042 --rc geninfo_unexecuted_blocks=1 00:08:44.042 00:08:44.042 ' 00:08:44.042 17:24:35 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:44.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.042 --rc genhtml_branch_coverage=1 00:08:44.042 --rc genhtml_function_coverage=1 00:08:44.042 --rc genhtml_legend=1 00:08:44.042 --rc geninfo_all_blocks=1 00:08:44.042 --rc geninfo_unexecuted_blocks=1 00:08:44.042 00:08:44.042 ' 00:08:44.042 17:24:35 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:44.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.042 --rc genhtml_branch_coverage=1 00:08:44.042 --rc genhtml_function_coverage=1 00:08:44.042 --rc genhtml_legend=1 00:08:44.042 --rc geninfo_all_blocks=1 00:08:44.042 --rc geninfo_unexecuted_blocks=1 00:08:44.042 00:08:44.042 ' 00:08:44.042 17:24:35 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:44.042 17:24:35 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=134283 00:08:44.042 17:24:35 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 134283 00:08:44.042 17:24:35 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 134283 ']' 00:08:44.042 17:24:35 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:44.042 17:24:35 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.042 17:24:35 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:44.042 17:24:35 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.042 17:24:35 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:44.042 17:24:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.042 [2024-10-08 17:24:36.001871] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:08:44.042 [2024-10-08 17:24:36.001952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134283 ] 00:08:44.304 [2024-10-08 17:24:36.084114] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.304 [2024-10-08 17:24:36.146495] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.876 17:24:36 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:44.876 17:24:36 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:44.876 17:24:36 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:08:45.137 17:24:36 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 134283 00:08:45.137 17:24:36 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 134283 ']' 00:08:45.137 17:24:36 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 134283 00:08:45.137 17:24:36 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:08:45.137 17:24:36 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:45.137 17:24:36 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 134283 00:08:45.137 17:24:37 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:45.137 17:24:37 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:45.137 17:24:37 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 134283' 00:08:45.137 killing process with pid 134283 00:08:45.137 17:24:37 alias_rpc -- common/autotest_common.sh@969 -- # kill 134283 00:08:45.137 17:24:37 alias_rpc -- common/autotest_common.sh@974 -- # wait 134283 00:08:45.399 00:08:45.399 real 0m1.508s 00:08:45.399 user 0m1.621s 00:08:45.399 sys 0m0.452s 00:08:45.399 17:24:37 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.399 17:24:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.399 ************************************ 00:08:45.399 END TEST alias_rpc 00:08:45.399 ************************************ 00:08:45.399 17:24:37 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:45.399 17:24:37 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:45.399 17:24:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:45.399 17:24:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.399 17:24:37 -- common/autotest_common.sh@10 -- # set +x 00:08:45.399 ************************************ 00:08:45.399 START TEST spdkcli_tcp 00:08:45.399 ************************************ 00:08:45.399 17:24:37 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:45.661 * Looking for test storage... 
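killprocess, traced in full for pid 134283 just above, is more careful than a bare kill: it probes the pid with `kill -0`, branches on `uname`, and re-reads the command name with `ps --no-headers -o comm=` to confirm the pid still belongs to the expected process (reactor_0 rather than sudo or a recycled pid) before issuing `kill` and reaping with `wait`. A condensed Linux-only sketch of that pattern (the real helper's sudo handling is simplified away here):

    # killprocess <pid> -- stop a child we spawned, guarding against pid reuse.
    killprocess() {
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 0      # already gone
        name=$(ps --no-headers -o comm= "$pid")     # e.g. "reactor_0"
        if [ "$name" = sudo ]; then
            return 1   # would need the real helper's privileged-wrapper logic
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap; works because the pid is our own child
    }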
00:08:45.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:08:45.661 17:24:37 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:45.661 17:24:37 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:08:45.661 17:24:37 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:45.661 17:24:37 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:45.661 17:24:37 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.661 17:24:37 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.661 17:24:37 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.661 17:24:37 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.661 17:24:37 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.661 17:24:37 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.661 17:24:37 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.661 17:24:37 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.661 17:24:37 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.661 17:24:37 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.661 17:24:37 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.661 17:24:37 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:45.661 17:24:37 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:45.661 17:24:37 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.662 17:24:37 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:45.662 17:24:37 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:45.662 17:24:37 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:45.662 17:24:37 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.662 17:24:37 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:45.662 17:24:37 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.662 17:24:37 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:45.662 17:24:37 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:45.662 17:24:37 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.662 17:24:37 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:45.662 17:24:37 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.662 17:24:37 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.662 17:24:37 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.662 17:24:37 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:45.662 17:24:37 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.662 17:24:37 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:45.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.662 --rc genhtml_branch_coverage=1 00:08:45.662 --rc genhtml_function_coverage=1 00:08:45.662 --rc genhtml_legend=1 00:08:45.662 --rc geninfo_all_blocks=1 00:08:45.662 --rc geninfo_unexecuted_blocks=1 00:08:45.662 00:08:45.662 ' 00:08:45.662 17:24:37 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:45.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.662 --rc genhtml_branch_coverage=1 00:08:45.662 --rc genhtml_function_coverage=1 00:08:45.662 --rc genhtml_legend=1 00:08:45.662 --rc geninfo_all_blocks=1 00:08:45.662 --rc 
geninfo_unexecuted_blocks=1 00:08:45.662 00:08:45.662 ' 00:08:45.662 17:24:37 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:45.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.662 --rc genhtml_branch_coverage=1 00:08:45.662 --rc genhtml_function_coverage=1 00:08:45.662 --rc genhtml_legend=1 00:08:45.662 --rc geninfo_all_blocks=1 00:08:45.662 --rc geninfo_unexecuted_blocks=1 00:08:45.662 00:08:45.662 ' 00:08:45.662 17:24:37 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:45.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.662 --rc genhtml_branch_coverage=1 00:08:45.662 --rc genhtml_function_coverage=1 00:08:45.662 --rc genhtml_legend=1 00:08:45.662 --rc geninfo_all_blocks=1 00:08:45.662 --rc geninfo_unexecuted_blocks=1 00:08:45.662 00:08:45.662 ' 00:08:45.662 17:24:37 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:08:45.662 17:24:37 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:08:45.662 17:24:37 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:08:45.662 17:24:37 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:45.662 17:24:37 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:45.662 17:24:37 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:45.662 17:24:37 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:45.662 17:24:37 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:45.662 17:24:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:45.662 17:24:37 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=134623 00:08:45.662 17:24:37 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 134623 00:08:45.662 17:24:37 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:45.662 17:24:37 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 134623 ']' 00:08:45.662 17:24:37 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.662 17:24:37 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:45.662 17:24:37 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.662 17:24:37 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:45.662 17:24:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:45.662 [2024-10-08 17:24:37.601295] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:08:45.662 [2024-10-08 17:24:37.601369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134623 ] 00:08:45.924 [2024-10-08 17:24:37.683650] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:45.924 [2024-10-08 17:24:37.755102] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.924 [2024-10-08 17:24:37.755120] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.498 17:24:38 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:46.498 17:24:38 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:08:46.498 17:24:38 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=134782 00:08:46.498 17:24:38 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:46.498 17:24:38 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:46.759 [ 00:08:46.759 "bdev_malloc_delete", 00:08:46.759 "bdev_malloc_create", 00:08:46.759 "bdev_null_resize", 00:08:46.759 "bdev_null_delete", 00:08:46.759 "bdev_null_create", 00:08:46.759 "bdev_nvme_cuse_unregister", 00:08:46.759 "bdev_nvme_cuse_register", 00:08:46.759 "bdev_opal_new_user", 00:08:46.759 "bdev_opal_set_lock_state", 00:08:46.759 "bdev_opal_delete", 00:08:46.759 "bdev_opal_get_info", 00:08:46.759 "bdev_opal_create", 00:08:46.759 "bdev_nvme_opal_revert", 00:08:46.759 "bdev_nvme_opal_init", 00:08:46.759 "bdev_nvme_send_cmd", 00:08:46.759 "bdev_nvme_set_keys", 00:08:46.759 "bdev_nvme_get_path_iostat", 00:08:46.759 "bdev_nvme_get_mdns_discovery_info", 00:08:46.759 "bdev_nvme_stop_mdns_discovery", 00:08:46.759 "bdev_nvme_start_mdns_discovery", 00:08:46.759 "bdev_nvme_set_multipath_policy", 00:08:46.759 "bdev_nvme_set_preferred_path", 00:08:46.759 "bdev_nvme_get_io_paths", 00:08:46.759 "bdev_nvme_remove_error_injection", 00:08:46.759 "bdev_nvme_add_error_injection", 00:08:46.759 "bdev_nvme_get_discovery_info", 00:08:46.759 "bdev_nvme_stop_discovery", 00:08:46.759 "bdev_nvme_start_discovery", 00:08:46.759 "bdev_nvme_get_controller_health_info", 00:08:46.759 "bdev_nvme_disable_controller", 00:08:46.759 "bdev_nvme_enable_controller", 00:08:46.759 "bdev_nvme_reset_controller", 00:08:46.759 "bdev_nvme_get_transport_statistics", 00:08:46.759 "bdev_nvme_apply_firmware", 00:08:46.759 "bdev_nvme_detach_controller", 00:08:46.759 "bdev_nvme_get_controllers", 00:08:46.759 "bdev_nvme_attach_controller", 00:08:46.759 "bdev_nvme_set_hotplug", 00:08:46.759 "bdev_nvme_set_options", 00:08:46.759 "bdev_passthru_delete", 00:08:46.759 "bdev_passthru_create", 00:08:46.759 "bdev_lvol_set_parent_bdev", 00:08:46.759 "bdev_lvol_set_parent", 00:08:46.759 "bdev_lvol_check_shallow_copy", 00:08:46.759 "bdev_lvol_start_shallow_copy", 00:08:46.759 "bdev_lvol_grow_lvstore", 00:08:46.759 "bdev_lvol_get_lvols", 00:08:46.759 "bdev_lvol_get_lvstores", 00:08:46.759 "bdev_lvol_delete", 00:08:46.759 "bdev_lvol_set_read_only", 00:08:46.759 "bdev_lvol_resize", 00:08:46.759 "bdev_lvol_decouple_parent", 00:08:46.759 "bdev_lvol_inflate", 00:08:46.760 "bdev_lvol_rename", 00:08:46.760 "bdev_lvol_clone_bdev", 00:08:46.760 "bdev_lvol_clone", 00:08:46.760 "bdev_lvol_snapshot", 00:08:46.760 "bdev_lvol_create", 00:08:46.760 "bdev_lvol_delete_lvstore", 00:08:46.760 "bdev_lvol_rename_lvstore", 
00:08:46.760 "bdev_lvol_create_lvstore", 00:08:46.760 "bdev_raid_set_options", 00:08:46.760 "bdev_raid_remove_base_bdev", 00:08:46.760 "bdev_raid_add_base_bdev", 00:08:46.760 "bdev_raid_delete", 00:08:46.760 "bdev_raid_create", 00:08:46.760 "bdev_raid_get_bdevs", 00:08:46.760 "bdev_error_inject_error", 00:08:46.760 "bdev_error_delete", 00:08:46.760 "bdev_error_create", 00:08:46.760 "bdev_split_delete", 00:08:46.760 "bdev_split_create", 00:08:46.760 "bdev_delay_delete", 00:08:46.760 "bdev_delay_create", 00:08:46.760 "bdev_delay_update_latency", 00:08:46.760 "bdev_zone_block_delete", 00:08:46.760 "bdev_zone_block_create", 00:08:46.760 "blobfs_create", 00:08:46.760 "blobfs_detect", 00:08:46.760 "blobfs_set_cache_size", 00:08:46.760 "bdev_aio_delete", 00:08:46.760 "bdev_aio_rescan", 00:08:46.760 "bdev_aio_create", 00:08:46.760 "bdev_ftl_set_property", 00:08:46.760 "bdev_ftl_get_properties", 00:08:46.760 "bdev_ftl_get_stats", 00:08:46.760 "bdev_ftl_unmap", 00:08:46.760 "bdev_ftl_unload", 00:08:46.760 "bdev_ftl_delete", 00:08:46.760 "bdev_ftl_load", 00:08:46.760 "bdev_ftl_create", 00:08:46.760 "bdev_virtio_attach_controller", 00:08:46.760 "bdev_virtio_scsi_get_devices", 00:08:46.760 "bdev_virtio_detach_controller", 00:08:46.760 "bdev_virtio_blk_set_hotplug", 00:08:46.760 "bdev_iscsi_delete", 00:08:46.760 "bdev_iscsi_create", 00:08:46.760 "bdev_iscsi_set_options", 00:08:46.760 "accel_error_inject_error", 00:08:46.760 "ioat_scan_accel_module", 00:08:46.760 "dsa_scan_accel_module", 00:08:46.760 "iaa_scan_accel_module", 00:08:46.760 "vfu_virtio_create_fs_endpoint", 00:08:46.760 "vfu_virtio_create_scsi_endpoint", 00:08:46.760 "vfu_virtio_scsi_remove_target", 00:08:46.760 "vfu_virtio_scsi_add_target", 00:08:46.760 "vfu_virtio_create_blk_endpoint", 00:08:46.760 "vfu_virtio_delete_endpoint", 00:08:46.760 "keyring_file_remove_key", 00:08:46.760 "keyring_file_add_key", 00:08:46.760 "keyring_linux_set_options", 00:08:46.760 "fsdev_aio_delete", 00:08:46.760 "fsdev_aio_create", 00:08:46.760 "iscsi_get_histogram", 00:08:46.760 "iscsi_enable_histogram", 00:08:46.760 "iscsi_set_options", 00:08:46.760 "iscsi_get_auth_groups", 00:08:46.760 "iscsi_auth_group_remove_secret", 00:08:46.760 "iscsi_auth_group_add_secret", 00:08:46.760 "iscsi_delete_auth_group", 00:08:46.760 "iscsi_create_auth_group", 00:08:46.760 "iscsi_set_discovery_auth", 00:08:46.760 "iscsi_get_options", 00:08:46.760 "iscsi_target_node_request_logout", 00:08:46.760 "iscsi_target_node_set_redirect", 00:08:46.760 "iscsi_target_node_set_auth", 00:08:46.760 "iscsi_target_node_add_lun", 00:08:46.760 "iscsi_get_stats", 00:08:46.760 "iscsi_get_connections", 00:08:46.760 "iscsi_portal_group_set_auth", 00:08:46.760 "iscsi_start_portal_group", 00:08:46.760 "iscsi_delete_portal_group", 00:08:46.760 "iscsi_create_portal_group", 00:08:46.760 "iscsi_get_portal_groups", 00:08:46.760 "iscsi_delete_target_node", 00:08:46.760 "iscsi_target_node_remove_pg_ig_maps", 00:08:46.760 "iscsi_target_node_add_pg_ig_maps", 00:08:46.760 "iscsi_create_target_node", 00:08:46.760 "iscsi_get_target_nodes", 00:08:46.760 "iscsi_delete_initiator_group", 00:08:46.760 "iscsi_initiator_group_remove_initiators", 00:08:46.760 "iscsi_initiator_group_add_initiators", 00:08:46.760 "iscsi_create_initiator_group", 00:08:46.760 "iscsi_get_initiator_groups", 00:08:46.760 "nvmf_set_crdt", 00:08:46.760 "nvmf_set_config", 00:08:46.760 "nvmf_set_max_subsystems", 00:08:46.760 "nvmf_stop_mdns_prr", 00:08:46.760 "nvmf_publish_mdns_prr", 00:08:46.760 "nvmf_subsystem_get_listeners", 00:08:46.760 
"nvmf_subsystem_get_qpairs", 00:08:46.760 "nvmf_subsystem_get_controllers", 00:08:46.760 "nvmf_get_stats", 00:08:46.760 "nvmf_get_transports", 00:08:46.760 "nvmf_create_transport", 00:08:46.760 "nvmf_get_targets", 00:08:46.760 "nvmf_delete_target", 00:08:46.760 "nvmf_create_target", 00:08:46.760 "nvmf_subsystem_allow_any_host", 00:08:46.760 "nvmf_subsystem_set_keys", 00:08:46.760 "nvmf_subsystem_remove_host", 00:08:46.760 "nvmf_subsystem_add_host", 00:08:46.760 "nvmf_ns_remove_host", 00:08:46.760 "nvmf_ns_add_host", 00:08:46.760 "nvmf_subsystem_remove_ns", 00:08:46.760 "nvmf_subsystem_set_ns_ana_group", 00:08:46.760 "nvmf_subsystem_add_ns", 00:08:46.760 "nvmf_subsystem_listener_set_ana_state", 00:08:46.760 "nvmf_discovery_get_referrals", 00:08:46.760 "nvmf_discovery_remove_referral", 00:08:46.760 "nvmf_discovery_add_referral", 00:08:46.760 "nvmf_subsystem_remove_listener", 00:08:46.760 "nvmf_subsystem_add_listener", 00:08:46.760 "nvmf_delete_subsystem", 00:08:46.760 "nvmf_create_subsystem", 00:08:46.760 "nvmf_get_subsystems", 00:08:46.760 "env_dpdk_get_mem_stats", 00:08:46.760 "nbd_get_disks", 00:08:46.760 "nbd_stop_disk", 00:08:46.760 "nbd_start_disk", 00:08:46.760 "ublk_recover_disk", 00:08:46.760 "ublk_get_disks", 00:08:46.760 "ublk_stop_disk", 00:08:46.760 "ublk_start_disk", 00:08:46.760 "ublk_destroy_target", 00:08:46.760 "ublk_create_target", 00:08:46.760 "virtio_blk_create_transport", 00:08:46.760 "virtio_blk_get_transports", 00:08:46.760 "vhost_controller_set_coalescing", 00:08:46.760 "vhost_get_controllers", 00:08:46.760 "vhost_delete_controller", 00:08:46.760 "vhost_create_blk_controller", 00:08:46.760 "vhost_scsi_controller_remove_target", 00:08:46.760 "vhost_scsi_controller_add_target", 00:08:46.760 "vhost_start_scsi_controller", 00:08:46.760 "vhost_create_scsi_controller", 00:08:46.760 "thread_set_cpumask", 00:08:46.760 "scheduler_set_options", 00:08:46.760 "framework_get_governor", 00:08:46.760 "framework_get_scheduler", 00:08:46.760 "framework_set_scheduler", 00:08:46.760 "framework_get_reactors", 00:08:46.760 "thread_get_io_channels", 00:08:46.760 "thread_get_pollers", 00:08:46.760 "thread_get_stats", 00:08:46.760 "framework_monitor_context_switch", 00:08:46.760 "spdk_kill_instance", 00:08:46.760 "log_enable_timestamps", 00:08:46.760 "log_get_flags", 00:08:46.760 "log_clear_flag", 00:08:46.760 "log_set_flag", 00:08:46.760 "log_get_level", 00:08:46.760 "log_set_level", 00:08:46.760 "log_get_print_level", 00:08:46.760 "log_set_print_level", 00:08:46.760 "framework_enable_cpumask_locks", 00:08:46.760 "framework_disable_cpumask_locks", 00:08:46.760 "framework_wait_init", 00:08:46.760 "framework_start_init", 00:08:46.760 "scsi_get_devices", 00:08:46.760 "bdev_get_histogram", 00:08:46.760 "bdev_enable_histogram", 00:08:46.760 "bdev_set_qos_limit", 00:08:46.760 "bdev_set_qd_sampling_period", 00:08:46.760 "bdev_get_bdevs", 00:08:46.760 "bdev_reset_iostat", 00:08:46.760 "bdev_get_iostat", 00:08:46.760 "bdev_examine", 00:08:46.760 "bdev_wait_for_examine", 00:08:46.760 "bdev_set_options", 00:08:46.760 "accel_get_stats", 00:08:46.760 "accel_set_options", 00:08:46.760 "accel_set_driver", 00:08:46.760 "accel_crypto_key_destroy", 00:08:46.760 "accel_crypto_keys_get", 00:08:46.760 "accel_crypto_key_create", 00:08:46.760 "accel_assign_opc", 00:08:46.760 "accel_get_module_info", 00:08:46.760 "accel_get_opc_assignments", 00:08:46.760 "vmd_rescan", 00:08:46.760 "vmd_remove_device", 00:08:46.760 "vmd_enable", 00:08:46.760 "sock_get_default_impl", 00:08:46.760 "sock_set_default_impl", 
00:08:46.760 "sock_impl_set_options", 00:08:46.760 "sock_impl_get_options", 00:08:46.760 "iobuf_get_stats", 00:08:46.760 "iobuf_set_options", 00:08:46.760 "keyring_get_keys", 00:08:46.760 "vfu_tgt_set_base_path", 00:08:46.760 "framework_get_pci_devices", 00:08:46.760 "framework_get_config", 00:08:46.760 "framework_get_subsystems", 00:08:46.760 "fsdev_set_opts", 00:08:46.760 "fsdev_get_opts", 00:08:46.760 "trace_get_info", 00:08:46.760 "trace_get_tpoint_group_mask", 00:08:46.760 "trace_disable_tpoint_group", 00:08:46.760 "trace_enable_tpoint_group", 00:08:46.760 "trace_clear_tpoint_mask", 00:08:46.760 "trace_set_tpoint_mask", 00:08:46.760 "notify_get_notifications", 00:08:46.760 "notify_get_types", 00:08:46.760 "spdk_get_version", 00:08:46.760 "rpc_get_methods" 00:08:46.760 ] 00:08:46.760 17:24:38 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:46.760 17:24:38 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:46.760 17:24:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:46.760 17:24:38 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:46.760 17:24:38 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 134623 00:08:46.760 17:24:38 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 134623 ']' 00:08:46.760 17:24:38 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 134623 00:08:46.760 17:24:38 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:08:46.760 17:24:38 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:46.761 17:24:38 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 134623 00:08:46.761 17:24:38 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:46.761 17:24:38 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:46.761 17:24:38 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 134623' 00:08:46.761 killing process with pid 134623 00:08:46.761 17:24:38 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 134623 00:08:46.761 17:24:38 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 134623 00:08:47.023 00:08:47.023 real 0m1.561s 00:08:47.023 user 0m2.783s 00:08:47.023 sys 0m0.500s 00:08:47.023 17:24:38 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.023 17:24:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:47.023 ************************************ 00:08:47.023 END TEST spdkcli_tcp 00:08:47.023 ************************************ 00:08:47.023 17:24:38 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:47.023 17:24:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:47.023 17:24:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.023 17:24:38 -- common/autotest_common.sh@10 -- # set +x 00:08:47.023 ************************************ 00:08:47.023 START TEST dpdk_mem_utility 00:08:47.024 ************************************ 00:08:47.024 17:24:38 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:47.286 * Looking for test storage... 
00:08:47.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:08:47.286 17:24:39 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:47.286 17:24:39 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:08:47.286 17:24:39 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:47.286 17:24:39 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:47.286 17:24:39 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.286 17:24:39 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.286 17:24:39 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.286 17:24:39 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.286 17:24:39 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.286 17:24:39 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.286 17:24:39 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.286 17:24:39 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.286 17:24:39 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.286 17:24:39 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.286 17:24:39 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.286 17:24:39 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:47.286 17:24:39 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:47.286 17:24:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.286 17:24:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:47.286 17:24:39 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:47.286 17:24:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:47.286 17:24:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.286 17:24:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:47.286 17:24:39 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.286 17:24:39 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:47.287 17:24:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:47.287 17:24:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.287 17:24:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:47.287 17:24:39 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.287 17:24:39 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.287 17:24:39 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.287 17:24:39 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:47.287 17:24:39 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.287 17:24:39 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:47.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.287 --rc genhtml_branch_coverage=1 00:08:47.287 --rc genhtml_function_coverage=1 00:08:47.287 --rc genhtml_legend=1 00:08:47.287 --rc geninfo_all_blocks=1 00:08:47.287 --rc geninfo_unexecuted_blocks=1 00:08:47.287 00:08:47.287 ' 00:08:47.287 17:24:39 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:47.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.287 --rc 
genhtml_branch_coverage=1 00:08:47.287 --rc genhtml_function_coverage=1 00:08:47.287 --rc genhtml_legend=1 00:08:47.287 --rc geninfo_all_blocks=1 00:08:47.287 --rc geninfo_unexecuted_blocks=1 00:08:47.287 00:08:47.287 ' 00:08:47.287 17:24:39 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:47.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.287 --rc genhtml_branch_coverage=1 00:08:47.287 --rc genhtml_function_coverage=1 00:08:47.287 --rc genhtml_legend=1 00:08:47.287 --rc geninfo_all_blocks=1 00:08:47.287 --rc geninfo_unexecuted_blocks=1 00:08:47.287 00:08:47.287 ' 00:08:47.287 17:24:39 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:47.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.287 --rc genhtml_branch_coverage=1 00:08:47.287 --rc genhtml_function_coverage=1 00:08:47.287 --rc genhtml_legend=1 00:08:47.287 --rc geninfo_all_blocks=1 00:08:47.287 --rc geninfo_unexecuted_blocks=1 00:08:47.287 00:08:47.287 ' 00:08:47.287 17:24:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:47.287 17:24:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=134992 00:08:47.287 17:24:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 134992 00:08:47.287 17:24:39 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 134992 ']' 00:08:47.287 17:24:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:47.287 17:24:39 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.287 17:24:39 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.287 17:24:39 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.287 17:24:39 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.287 17:24:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:47.287 [2024-10-08 17:24:39.226668] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
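The dpdk_mem_utility test that follows exercises scripts/dpdk_mem_info.py against a running spdk_tgt. Condensed to its three steps, under the assumption that the tool reads the dump file the RPC reports by default (/tmp/spdk_mem_dump.txt, as seen in the output below):

  ./scripts/rpc.py env_dpdk_get_mem_stats    # target writes its DPDK memory stats to /tmp/spdk_mem_dump.txt
  ./scripts/dpdk_mem_info.py                 # summary view: heaps, mempools, memzones
  ./scripts/dpdk_mem_info.py -m 0            # per-element detail for heap id 0, as dumped below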
00:08:47.287 [2024-10-08 17:24:39.226742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134992 ] 00:08:47.548 [2024-10-08 17:24:39.306360] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.548 [2024-10-08 17:24:39.368633] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.121 17:24:40 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.121 17:24:40 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:08:48.121 17:24:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:48.121 17:24:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:48.121 17:24:40 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.121 17:24:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:48.121 { 00:08:48.121 "filename": "/tmp/spdk_mem_dump.txt" 00:08:48.121 } 00:08:48.121 17:24:40 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.121 17:24:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:48.121 DPDK memory size 860.000000 MiB in 1 heap(s) 00:08:48.121 1 heaps totaling size 860.000000 MiB 00:08:48.121 size: 860.000000 MiB heap id: 0 00:08:48.121 end heaps---------- 00:08:48.121 9 mempools totaling size 642.649841 MiB 00:08:48.121 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:48.121 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:48.121 size: 92.545471 MiB name: bdev_io_134992 00:08:48.121 size: 51.011292 MiB name: evtpool_134992 00:08:48.121 size: 50.003479 MiB name: msgpool_134992 00:08:48.121 size: 36.509338 MiB name: fsdev_io_134992 00:08:48.121 size: 21.763794 MiB name: PDU_Pool 00:08:48.121 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:48.121 size: 0.026123 MiB name: Session_Pool 00:08:48.121 end mempools------- 00:08:48.121 6 memzones totaling size 4.142822 MiB 00:08:48.121 size: 1.000366 MiB name: RG_ring_0_134992 00:08:48.121 size: 1.000366 MiB name: RG_ring_1_134992 00:08:48.121 size: 1.000366 MiB name: RG_ring_4_134992 00:08:48.121 size: 1.000366 MiB name: RG_ring_5_134992 00:08:48.121 size: 0.125366 MiB name: RG_ring_2_134992 00:08:48.121 size: 0.015991 MiB name: RG_ring_3_134992 00:08:48.121 end memzones------- 00:08:48.121 17:24:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:08:48.121 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:08:48.121 list of free elements. 
size: 13.984680 MiB 00:08:48.121 element at address: 0x200000400000 with size: 1.999512 MiB 00:08:48.121 element at address: 0x200000800000 with size: 1.996948 MiB 00:08:48.121 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:08:48.121 element at address: 0x20001be00000 with size: 0.999878 MiB 00:08:48.121 element at address: 0x200034a00000 with size: 0.994446 MiB 00:08:48.121 element at address: 0x200009600000 with size: 0.959839 MiB 00:08:48.121 element at address: 0x200015e00000 with size: 0.954285 MiB 00:08:48.121 element at address: 0x20001c000000 with size: 0.936584 MiB 00:08:48.121 element at address: 0x200000200000 with size: 0.841614 MiB 00:08:48.121 element at address: 0x20001d800000 with size: 0.582886 MiB 00:08:48.121 element at address: 0x200003e00000 with size: 0.495422 MiB 00:08:48.121 element at address: 0x20000d800000 with size: 0.490723 MiB 00:08:48.121 element at address: 0x20001c200000 with size: 0.485657 MiB 00:08:48.121 element at address: 0x200007000000 with size: 0.481934 MiB 00:08:48.121 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:08:48.121 element at address: 0x200003a00000 with size: 0.355042 MiB 00:08:48.121 list of standard malloc elements. size: 199.218628 MiB 00:08:48.121 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:08:48.121 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:08:48.121 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:08:48.121 element at address: 0x20001befff80 with size: 1.000122 MiB 00:08:48.121 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:08:48.121 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:08:48.121 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:08:48.121 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:08:48.121 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:08:48.121 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:08:48.121 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:08:48.121 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:08:48.121 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:08:48.121 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:08:48.121 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:08:48.121 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:08:48.121 element at address: 0x200003a5ae40 with size: 0.000183 MiB 00:08:48.121 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:08:48.121 element at address: 0x200003a5f300 with size: 0.000183 MiB 00:08:48.121 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:08:48.121 element at address: 0x200003a7f680 with size: 0.000183 MiB 00:08:48.121 element at address: 0x200003aff940 with size: 0.000183 MiB 00:08:48.121 element at address: 0x200003affb40 with size: 0.000183 MiB 00:08:48.121 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:08:48.121 element at address: 0x200003eff000 with size: 0.000183 MiB 00:08:48.121 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:08:48.121 element at address: 0x20000707b600 with size: 0.000183 MiB 00:08:48.121 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:08:48.121 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:08:48.121 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:08:48.121 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:08:48.121 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:08:48.121 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:08:48.121 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:08:48.121 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:08:48.121 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:08:48.121 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:08:48.121 element at address: 0x20001d895380 with size: 0.000183 MiB 00:08:48.121 element at address: 0x20001d895440 with size: 0.000183 MiB 00:08:48.121 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:08:48.121 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:08:48.121 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:08:48.121 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:08:48.121 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:08:48.121 list of memzone associated elements. size: 646.796692 MiB 00:08:48.121 element at address: 0x20001d895500 with size: 211.416748 MiB 00:08:48.121 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:48.121 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:08:48.121 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:48.121 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:08:48.121 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_134992_0 00:08:48.121 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:08:48.121 associated memzone info: size: 48.002930 MiB name: MP_evtpool_134992_0 00:08:48.121 element at address: 0x200003fff380 with size: 48.003052 MiB 00:08:48.121 associated memzone info: size: 48.002930 MiB name: MP_msgpool_134992_0 00:08:48.121 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:08:48.121 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_134992_0 00:08:48.121 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:08:48.121 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:48.121 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:08:48.121 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:48.121 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:08:48.121 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_134992 00:08:48.121 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:08:48.121 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_134992 00:08:48.121 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:08:48.121 associated memzone info: size: 1.007996 MiB name: MP_evtpool_134992 00:08:48.121 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:08:48.121 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:48.121 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:08:48.121 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:48.121 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:08:48.121 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:48.121 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:08:48.121 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:48.121 element at address: 0x200003eff180 with size: 1.000488 MiB 00:08:48.121 associated memzone info: size: 1.000366 MiB name: RG_ring_0_134992 00:08:48.121 element at address: 0x200003affc00 with size: 1.000488 MiB 00:08:48.121 associated memzone info: size: 
1.000366 MiB name: RG_ring_1_134992 00:08:48.121 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:08:48.121 associated memzone info: size: 1.000366 MiB name: RG_ring_4_134992 00:08:48.121 element at address: 0x200034afe940 with size: 1.000488 MiB 00:08:48.121 associated memzone info: size: 1.000366 MiB name: RG_ring_5_134992 00:08:48.121 element at address: 0x200003a7f740 with size: 0.500488 MiB 00:08:48.121 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_134992 00:08:48.121 element at address: 0x200003e7ee00 with size: 0.500488 MiB 00:08:48.121 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_134992 00:08:48.121 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:08:48.121 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:48.121 element at address: 0x20000707b780 with size: 0.500488 MiB 00:08:48.121 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:48.121 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:08:48.121 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:48.121 element at address: 0x200003a5f3c0 with size: 0.125488 MiB 00:08:48.121 associated memzone info: size: 0.125366 MiB name: RG_ring_2_134992 00:08:48.121 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:08:48.121 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:48.122 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:08:48.122 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:48.122 element at address: 0x200003a5b100 with size: 0.016113 MiB 00:08:48.122 associated memzone info: size: 0.015991 MiB name: RG_ring_3_134992 00:08:48.122 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:08:48.122 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:48.122 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:08:48.122 associated memzone info: size: 0.000183 MiB name: MP_msgpool_134992 00:08:48.122 element at address: 0x200003affa00 with size: 0.000305 MiB 00:08:48.122 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_134992 00:08:48.122 element at address: 0x200003a5af00 with size: 0.000305 MiB 00:08:48.122 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_134992 00:08:48.122 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:08:48.122 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:48.383 17:24:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:48.383 17:24:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 134992 00:08:48.383 17:24:40 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 134992 ']' 00:08:48.383 17:24:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 134992 00:08:48.383 17:24:40 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:08:48.383 17:24:40 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:48.383 17:24:40 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 134992 00:08:48.383 17:24:40 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:48.383 17:24:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:48.383 17:24:40 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 134992' 00:08:48.383 killing 
process with pid 134992 00:08:48.383 17:24:40 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 134992 00:08:48.383 17:24:40 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 134992 00:08:48.645 00:08:48.645 real 0m1.424s 00:08:48.645 user 0m1.483s 00:08:48.645 sys 0m0.438s 00:08:48.645 17:24:40 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.645 17:24:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:48.645 ************************************ 00:08:48.645 END TEST dpdk_mem_utility 00:08:48.645 ************************************ 00:08:48.645 17:24:40 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:48.645 17:24:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:48.645 17:24:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.645 17:24:40 -- common/autotest_common.sh@10 -- # set +x 00:08:48.645 ************************************ 00:08:48.645 START TEST event 00:08:48.645 ************************************ 00:08:48.645 17:24:40 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:48.645 * Looking for test storage... 00:08:48.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:48.645 17:24:40 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:48.645 17:24:40 event -- common/autotest_common.sh@1681 -- # lcov --version 00:08:48.645 17:24:40 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:48.907 17:24:40 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:48.907 17:24:40 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.907 17:24:40 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.907 17:24:40 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.907 17:24:40 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.907 17:24:40 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.907 17:24:40 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.907 17:24:40 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.907 17:24:40 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.907 17:24:40 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.907 17:24:40 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.907 17:24:40 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.907 17:24:40 event -- scripts/common.sh@344 -- # case "$op" in 00:08:48.907 17:24:40 event -- scripts/common.sh@345 -- # : 1 00:08:48.907 17:24:40 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.907 17:24:40 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:48.907 17:24:40 event -- scripts/common.sh@365 -- # decimal 1 00:08:48.907 17:24:40 event -- scripts/common.sh@353 -- # local d=1 00:08:48.907 17:24:40 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.907 17:24:40 event -- scripts/common.sh@355 -- # echo 1 00:08:48.907 17:24:40 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.907 17:24:40 event -- scripts/common.sh@366 -- # decimal 2 00:08:48.907 17:24:40 event -- scripts/common.sh@353 -- # local d=2 00:08:48.907 17:24:40 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.907 17:24:40 event -- scripts/common.sh@355 -- # echo 2 00:08:48.907 17:24:40 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.907 17:24:40 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.907 17:24:40 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.907 17:24:40 event -- scripts/common.sh@368 -- # return 0 00:08:48.907 17:24:40 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.907 17:24:40 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:48.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.907 --rc genhtml_branch_coverage=1 00:08:48.907 --rc genhtml_function_coverage=1 00:08:48.907 --rc genhtml_legend=1 00:08:48.907 --rc geninfo_all_blocks=1 00:08:48.907 --rc geninfo_unexecuted_blocks=1 00:08:48.907 00:08:48.907 ' 00:08:48.907 17:24:40 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:48.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.907 --rc genhtml_branch_coverage=1 00:08:48.907 --rc genhtml_function_coverage=1 00:08:48.907 --rc genhtml_legend=1 00:08:48.907 --rc geninfo_all_blocks=1 00:08:48.907 --rc geninfo_unexecuted_blocks=1 00:08:48.907 00:08:48.907 ' 00:08:48.907 17:24:40 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:48.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.907 --rc genhtml_branch_coverage=1 00:08:48.907 --rc genhtml_function_coverage=1 00:08:48.907 --rc genhtml_legend=1 00:08:48.907 --rc geninfo_all_blocks=1 00:08:48.907 --rc geninfo_unexecuted_blocks=1 00:08:48.907 00:08:48.907 ' 00:08:48.908 17:24:40 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:48.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.908 --rc genhtml_branch_coverage=1 00:08:48.908 --rc genhtml_function_coverage=1 00:08:48.908 --rc genhtml_legend=1 00:08:48.908 --rc geninfo_all_blocks=1 00:08:48.908 --rc geninfo_unexecuted_blocks=1 00:08:48.908 00:08:48.908 ' 00:08:48.908 17:24:40 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:08:48.908 17:24:40 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:48.908 17:24:40 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:48.908 17:24:40 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:48.908 17:24:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.908 17:24:40 event -- common/autotest_common.sh@10 -- # set +x 00:08:48.908 ************************************ 00:08:48.908 START TEST event_perf 00:08:48.908 ************************************ 00:08:48.908 17:24:40 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:08:48.908 Running I/O for 1 seconds...[2024-10-08 17:24:40.718103] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:08:48.908 [2024-10-08 17:24:40.718225] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135314 ] 00:08:48.908 [2024-10-08 17:24:40.800912] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:48.908 [2024-10-08 17:24:40.866605] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.908 [2024-10-08 17:24:40.866759] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.908 [2024-10-08 17:24:40.866907] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.908 Running I/O for 1 seconds...[2024-10-08 17:24:40.866909] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.305 00:08:50.305 lcore 0: 179619 00:08:50.305 lcore 1: 179622 00:08:50.305 lcore 2: 179621 00:08:50.305 lcore 3: 179617 00:08:50.305 done. 00:08:50.305 00:08:50.305 real 0m1.215s 00:08:50.305 user 0m4.125s 00:08:50.305 sys 0m0.087s 00:08:50.305 17:24:41 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.305 17:24:41 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:50.305 ************************************ 00:08:50.305 END TEST event_perf 00:08:50.305 ************************************ 00:08:50.305 17:24:41 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:50.305 17:24:41 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:50.305 17:24:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.305 17:24:41 event -- common/autotest_common.sh@10 -- # set +x 00:08:50.305 ************************************ 00:08:50.305 START TEST event_reactor 00:08:50.305 ************************************ 00:08:50.305 17:24:41 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:50.305 [2024-10-08 17:24:42.005082] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
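A note on the event_perf results earlier in this run: the tool starts one reactor per bit set in the core mask and, after -t seconds, prints how many events each lcore processed, so four roughly equal counters indicate even distribution across cores. Reproduced from the invocation used above:

  ./test/event/event_perf/event_perf -m 0xF -t 1   # mask 0xF = 4 reactors, run for 1 second
  # output: one 'lcore N: <event count>' line per reactor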
00:08:50.305 [2024-10-08 17:24:42.005171] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135625 ] 00:08:50.305 [2024-10-08 17:24:42.087349] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.305 [2024-10-08 17:24:42.156042] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.248 test_start 00:08:51.248 oneshot 00:08:51.248 tick 100 00:08:51.248 tick 100 00:08:51.248 tick 250 00:08:51.248 tick 100 00:08:51.248 tick 100 00:08:51.248 tick 100 00:08:51.248 tick 250 00:08:51.248 tick 500 00:08:51.248 tick 100 00:08:51.248 tick 100 00:08:51.248 tick 250 00:08:51.248 tick 100 00:08:51.248 tick 100 00:08:51.248 test_end 00:08:51.248 00:08:51.248 real 0m1.216s 00:08:51.248 user 0m1.124s 00:08:51.248 sys 0m0.087s 00:08:51.248 17:24:43 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.248 17:24:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:51.248 ************************************ 00:08:51.248 END TEST event_reactor 00:08:51.248 ************************************ 00:08:51.248 17:24:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:51.248 17:24:43 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:51.248 17:24:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.248 17:24:43 event -- common/autotest_common.sh@10 -- # set +x 00:08:51.509 ************************************ 00:08:51.509 START TEST event_reactor_perf 00:08:51.509 ************************************ 00:08:51.509 17:24:43 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:51.509 [2024-10-08 17:24:43.293827] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
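On the event_reactor output above: the test runs timed pollers on a single reactor, and the oneshot/tick lines appear to be those pollers firing at their configured periods (an inference from the output; the log itself does not label them). The invocation, from this run:

  ./test/event/reactor/reactor -t 1   # single reactor (core mask 0x1 by default here), run for 1 second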
00:08:51.509 [2024-10-08 17:24:43.293910] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135973 ] 00:08:51.509 [2024-10-08 17:24:43.377617] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.509 [2024-10-08 17:24:43.446415] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.896 test_start 00:08:52.896 test_end 00:08:52.896 Performance: 537162 events per second 00:08:52.896 00:08:52.896 real 0m1.216s 00:08:52.896 user 0m1.124s 00:08:52.896 sys 0m0.088s 00:08:52.896 17:24:44 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.896 17:24:44 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:52.896 ************************************ 00:08:52.896 END TEST event_reactor_perf 00:08:52.896 ************************************ 00:08:52.896 17:24:44 event -- event/event.sh@49 -- # uname -s 00:08:52.896 17:24:44 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:52.896 17:24:44 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:52.896 17:24:44 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:52.896 17:24:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.896 17:24:44 event -- common/autotest_common.sh@10 -- # set +x 00:08:52.896 ************************************ 00:08:52.896 START TEST event_scheduler 00:08:52.896 ************************************ 00:08:52.896 17:24:44 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:52.896 * Looking for test storage... 
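The reactor_perf figure above (537162 events per second on one core) is the tool's single number of merit: it spins one reactor and counts how many events it can schedule and execute within the measurement window. From this run:

  ./test/event/reactor_perf/reactor_perf -t 1   # one reactor, 1-second measurement
  # prints 'Performance: <N> events per second'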
00:08:52.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:08:52.896 17:24:44 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:52.896 17:24:44 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:08:52.896 17:24:44 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:52.896 17:24:44 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.896 17:24:44 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:52.896 17:24:44 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.896 17:24:44 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:52.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.896 --rc genhtml_branch_coverage=1 00:08:52.896 --rc genhtml_function_coverage=1 00:08:52.896 --rc genhtml_legend=1 00:08:52.896 --rc geninfo_all_blocks=1 00:08:52.896 --rc geninfo_unexecuted_blocks=1 00:08:52.896 00:08:52.896 ' 00:08:52.896 17:24:44 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:52.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.896 --rc genhtml_branch_coverage=1 00:08:52.896 --rc genhtml_function_coverage=1 00:08:52.896 --rc genhtml_legend=1 00:08:52.896 --rc geninfo_all_blocks=1 00:08:52.896 --rc geninfo_unexecuted_blocks=1 00:08:52.896 00:08:52.896 ' 00:08:52.896 17:24:44 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:52.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.896 --rc genhtml_branch_coverage=1 00:08:52.896 --rc genhtml_function_coverage=1 00:08:52.896 --rc genhtml_legend=1 00:08:52.896 --rc geninfo_all_blocks=1 00:08:52.896 --rc geninfo_unexecuted_blocks=1 00:08:52.896 00:08:52.896 ' 00:08:52.896 17:24:44 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:52.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.896 --rc genhtml_branch_coverage=1 00:08:52.896 --rc genhtml_function_coverage=1 00:08:52.896 --rc genhtml_legend=1 00:08:52.896 --rc geninfo_all_blocks=1 00:08:52.896 --rc geninfo_unexecuted_blocks=1 00:08:52.896 00:08:52.896 ' 00:08:52.896 17:24:44 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:52.896 17:24:44 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=136366 00:08:52.896 17:24:44 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:52.896 17:24:44 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 136366 00:08:52.896 17:24:44 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 
00:08:52.896 17:24:44 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 136366 ']' 00:08:52.896 17:24:44 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.896 17:24:44 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:52.896 17:24:44 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.896 17:24:44 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:52.896 17:24:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:52.896 [2024-10-08 17:24:44.827189] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:08:52.896 [2024-10-08 17:24:44.827254] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136366 ] 00:08:53.158 [2024-10-08 17:24:44.894791] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:53.158 [2024-10-08 17:24:44.987046] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.158 [2024-10-08 17:24:44.987273] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.158 [2024-10-08 17:24:44.987274] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.158 [2024-10-08 17:24:44.987114] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.731 17:24:45 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:53.731 17:24:45 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:08:53.731 17:24:45 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:53.731 17:24:45 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.731 17:24:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:53.731 [2024-10-08 17:24:45.641714] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:08:53.731 [2024-10-08 17:24:45.641733] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:53.731 [2024-10-08 17:24:45.641743] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:53.731 [2024-10-08 17:24:45.641749] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:53.731 [2024-10-08 17:24:45.641755] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:53.731 17:24:45 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.731 17:24:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:53.731 17:24:45 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.731 17:24:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:53.731 [2024-10-08 17:24:45.704040] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
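The bring-up above shows the required ordering for selecting a scheduler: with the app launched under --wait-for-rpc, the scheduler is chosen before framework_start_init, and the dpdk governor error is non-fatal (the dynamic scheduler continues with its defaults: load limit 20, core limit 80, core busy 95). The same sequence by hand, against the app's default RPC socket (an assumption):

  ./scripts/rpc.py framework_set_scheduler dynamic   # must happen before initialization completes
  ./scripts/rpc.py framework_start_init              # finish subsystem init
  ./scripts/rpc.py framework_get_scheduler           # optional: confirm the active scheduler (method listed in rpc_get_methods earlier)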
00:08:53.731 17:24:45 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.731 17:24:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:53.731 17:24:45 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:53.731 17:24:45 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.731 17:24:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:53.993 ************************************ 00:08:53.993 START TEST scheduler_create_thread 00:08:53.993 ************************************ 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:53.993 2 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:53.993 3 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:53.993 4 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:53.993 5 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:53.993 6 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:53.993 7 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:53.993 8 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.993 17:24:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:54.567 9 00:08:54.567 17:24:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.567 17:24:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:54.567 17:24:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.567 17:24:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:55.510 10 00:08:55.510 17:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.511 17:24:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:55.511 17:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.511 17:24:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:56.454 17:24:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.454 17:24:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:56.454 17:24:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:56.454 17:24:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.454 17:24:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:57.026 17:24:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.026 17:24:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:57.026 17:24:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.026 17:24:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:57.969 17:24:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.969 17:24:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:57.969 17:24:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:57.969 17:24:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.969 17:24:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:58.231 17:24:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.231 00:08:58.231 real 0m4.465s 00:08:58.231 user 0m0.024s 00:08:58.231 sys 0m0.008s 00:08:58.231 17:24:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.231 17:24:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:58.231 ************************************ 00:08:58.231 END TEST scheduler_create_thread 00:08:58.231 ************************************ 00:08:58.492 17:24:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:58.492 17:24:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 136366 00:08:58.492 17:24:50 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 136366 ']' 00:08:58.492 17:24:50 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 136366 00:08:58.492 17:24:50 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:08:58.492 17:24:50 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:58.492 17:24:50 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 136366 00:08:58.492 17:24:50 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:58.492 17:24:50 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:58.492 17:24:50 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 136366' 00:08:58.492 killing process with pid 136366 00:08:58.492 17:24:50 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 136366 00:08:58.492 17:24:50 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 136366 00:08:58.753 [2024-10-08 17:24:50.486796] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
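The scheduler_create_thread sequence above, condensed: the test's RPC plugin (loaded with rpc.py --plugin scheduler_plugin, which must be importable, e.g. with PYTHONPATH pointing at test/event/scheduler — an assumption about the environment) creates pinned threads with a core mask and an active percentage, retunes one, and deletes another:

  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # returns a thread id
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # ids 11 and 12 are the ones from this run
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12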
00:08:58.753 00:08:58.753 real 0m6.076s 00:08:58.753 user 0m14.356s 00:08:58.753 sys 0m0.413s 00:08:58.753 17:24:50 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.753 17:24:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:58.753 ************************************ 00:08:58.753 END TEST event_scheduler 00:08:58.753 ************************************ 00:08:58.753 17:24:50 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:58.753 17:24:50 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:58.753 17:24:50 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:58.753 17:24:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.753 17:24:50 event -- common/autotest_common.sh@10 -- # set +x 00:08:58.753 ************************************ 00:08:58.753 START TEST app_repeat 00:08:58.753 ************************************ 00:08:58.753 17:24:50 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:08:58.753 17:24:50 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.753 17:24:50 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.753 17:24:50 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:58.753 17:24:50 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:58.753 17:24:50 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:58.753 17:24:50 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:58.753 17:24:50 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:58.753 17:24:50 event.app_repeat -- event/event.sh@19 -- # repeat_pid=137470 00:08:58.753 17:24:50 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:58.754 17:24:50 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:58.754 17:24:50 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 137470' 00:08:58.754 Process app_repeat pid: 137470 00:08:58.754 17:24:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:58.754 17:24:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:58.754 spdk_app_start Round 0 00:08:59.015 17:24:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 137470 /var/tmp/spdk-nbd.sock 00:08:59.015 17:24:50 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 137470 ']' 00:08:59.015 17:24:50 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:59.015 17:24:50 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:59.015 17:24:50 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:59.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:59.015 17:24:50 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:59.015 17:24:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:59.015 [2024-10-08 17:24:50.773161] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
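
app_repeat launches the SPDK test app on cores 0-1 (-m 0x3) with /var/tmp/spdk-nbd.sock as its RPC socket and a repeat parameter of 4 (-t, from repeat_times=4 in the trace), installs a cleanup trap, and blocks until the socket answers before running round 0. The launch pattern from the event.sh trace, condensed; killprocess and waitforlisten are helpers from autotest_common.sh, and the backgrounding with & / $! is inferred from the traced repeat_pid assignment:

    app=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat
    $app -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
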
00:08:59.015 [2024-10-08 17:24:50.773234] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137470 ] 00:08:59.015 [2024-10-08 17:24:50.855256] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:59.015 [2024-10-08 17:24:50.926551] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.015 [2024-10-08 17:24:50.926551] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.961 17:24:51 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:59.961 17:24:51 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:59.961 17:24:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:59.961 Malloc0 00:08:59.961 17:24:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:00.222 Malloc1 00:09:00.222 17:24:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:00.222 17:24:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.222 17:24:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:00.222 17:24:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:00.222 17:24:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:00.222 17:24:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:00.222 17:24:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:00.222 17:24:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.222 17:24:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:00.222 17:24:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:00.222 17:24:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:00.222 17:24:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:00.222 17:24:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:00.222 17:24:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:00.222 17:24:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:00.222 17:24:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:00.222 /dev/nbd0 00:09:00.222 17:24:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:00.222 17:24:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:00.222 17:24:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:00.222 17:24:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:00.222 17:24:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:00.222 17:24:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:00.222 17:24:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:09:00.222 17:24:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:00.222 17:24:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:00.222 17:24:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:00.222 17:24:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:00.222 1+0 records in 00:09:00.222 1+0 records out 00:09:00.222 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210205 s, 19.5 MB/s 00:09:00.222 17:24:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:00.222 17:24:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:00.222 17:24:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:00.484 17:24:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:00.484 17:24:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:00.484 17:24:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:00.484 17:24:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:00.484 17:24:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:00.484 /dev/nbd1 00:09:00.484 17:24:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:00.484 17:24:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:00.484 17:24:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:00.484 17:24:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:00.484 17:24:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:00.484 17:24:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:00.484 17:24:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:00.484 17:24:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:00.484 17:24:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:00.484 17:24:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:00.484 17:24:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:00.484 1+0 records in 00:09:00.484 1+0 records out 00:09:00.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224586 s, 18.2 MB/s 00:09:00.484 17:24:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:00.484 17:24:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:00.484 17:24:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:00.484 17:24:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:00.484 17:24:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:00.484 17:24:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:00.484 17:24:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:00.484 
17:24:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:00.484 17:24:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.484 17:24:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:00.745 17:24:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:00.745 { 00:09:00.745 "nbd_device": "/dev/nbd0", 00:09:00.745 "bdev_name": "Malloc0" 00:09:00.745 }, 00:09:00.745 { 00:09:00.745 "nbd_device": "/dev/nbd1", 00:09:00.745 "bdev_name": "Malloc1" 00:09:00.745 } 00:09:00.745 ]' 00:09:00.745 17:24:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:00.745 { 00:09:00.745 "nbd_device": "/dev/nbd0", 00:09:00.745 "bdev_name": "Malloc0" 00:09:00.745 }, 00:09:00.745 { 00:09:00.745 "nbd_device": "/dev/nbd1", 00:09:00.745 "bdev_name": "Malloc1" 00:09:00.745 } 00:09:00.745 ]' 00:09:00.745 17:24:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:00.745 17:24:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:00.745 /dev/nbd1' 00:09:00.745 17:24:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:00.745 /dev/nbd1' 00:09:00.745 17:24:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:00.745 17:24:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:00.745 17:24:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:00.745 17:24:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:00.745 17:24:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:00.745 17:24:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:00.745 17:24:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:00.745 17:24:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:00.745 17:24:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:00.745 17:24:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:00.745 17:24:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:00.745 17:24:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:00.745 256+0 records in 00:09:00.745 256+0 records out 00:09:00.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012633 s, 83.0 MB/s 00:09:00.745 17:24:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:00.745 17:24:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:00.745 256+0 records in 00:09:00.745 256+0 records out 00:09:00.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012647 s, 82.9 MB/s 00:09:00.745 17:24:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:00.745 17:24:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:01.008 256+0 records in 00:09:01.008 256+0 records out 00:09:01.008 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135446 s, 77.4 MB/s 00:09:01.008 17:24:52 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:01.008 17:24:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:01.269 17:24:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:01.269 17:24:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:01.269 17:24:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:01.269 17:24:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:01.269 17:24:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:09:01.269 17:24:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:01.269 17:24:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:01.269 17:24:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:01.269 17:24:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:01.269 17:24:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.269 17:24:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:01.530 17:24:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:01.530 17:24:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:01.530 17:24:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:01.530 17:24:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:01.530 17:24:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:01.530 17:24:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:01.530 17:24:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:01.530 17:24:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:01.530 17:24:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:01.530 17:24:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:01.530 17:24:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:01.530 17:24:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:01.530 17:24:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:01.790 17:24:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:01.790 [2024-10-08 17:24:53.700874] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:01.790 [2024-10-08 17:24:53.752965] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.790 [2024-10-08 17:24:53.752965] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.790 [2024-10-08 17:24:53.782014] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:01.790 [2024-10-08 17:24:53.782042] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:05.091 17:24:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:05.091 17:24:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:05.091 spdk_app_start Round 1 00:09:05.091 17:24:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 137470 /var/tmp/spdk-nbd.sock 00:09:05.091 17:24:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 137470 ']' 00:09:05.091 17:24:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:05.091 17:24:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:05.091 17:24:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:05.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
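
Round 0 ends the way every round does: both NBD devices are stopped, nbd_get_disks is queried again and must come back as an empty JSON array, and the app instance is killed with spdk_kill_instance SIGTERM before the next round's waitforlisten. The empty-list check in nbd_get_count boils down to jq plus grep -c, roughly as below, with $rpc as in the earlier sketch; the real helper in bdev/nbd_common.sh may word this differently:

    nbd_disks_json=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)   # grep exits 1 on zero matches
    if [ "$count" -ne 0 ]; then
        echo "expected no exported nbd devices, found $count"
    fi
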
00:09:05.091 17:24:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:05.091 17:24:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:05.091 17:24:56 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:05.091 17:24:56 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:05.091 17:24:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:05.091 Malloc0 00:09:05.091 17:24:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:05.352 Malloc1 00:09:05.352 17:24:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:05.352 17:24:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.352 17:24:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:05.352 17:24:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:05.352 17:24:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:05.352 17:24:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:05.352 17:24:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:05.352 17:24:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.352 17:24:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:05.352 17:24:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:05.352 17:24:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:05.352 17:24:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:05.352 17:24:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:05.352 17:24:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:05.352 17:24:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:05.352 17:24:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:05.352 /dev/nbd0 00:09:05.614 17:24:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:05.614 17:24:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:09:05.614 1+0 records in 00:09:05.614 1+0 records out 00:09:05.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218005 s, 18.8 MB/s 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:05.614 17:24:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:05.614 17:24:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:05.614 17:24:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:05.614 /dev/nbd1 00:09:05.614 17:24:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:05.614 17:24:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:05.614 1+0 records in 00:09:05.614 1+0 records out 00:09:05.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216762 s, 18.9 MB/s 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:05.614 17:24:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:05.876 17:24:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:05.876 17:24:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:09:05.876 { 00:09:05.876 "nbd_device": "/dev/nbd0", 00:09:05.876 "bdev_name": "Malloc0" 00:09:05.876 }, 00:09:05.876 { 00:09:05.876 "nbd_device": "/dev/nbd1", 00:09:05.876 "bdev_name": "Malloc1" 00:09:05.876 } 00:09:05.876 ]' 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:05.876 { 00:09:05.876 "nbd_device": "/dev/nbd0", 00:09:05.876 "bdev_name": "Malloc0" 00:09:05.876 }, 00:09:05.876 { 00:09:05.876 "nbd_device": "/dev/nbd1", 00:09:05.876 "bdev_name": "Malloc1" 00:09:05.876 } 00:09:05.876 ]' 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:05.876 /dev/nbd1' 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:05.876 /dev/nbd1' 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:05.876 256+0 records in 00:09:05.876 256+0 records out 00:09:05.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127575 s, 82.2 MB/s 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:05.876 17:24:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:06.137 256+0 records in 00:09:06.138 256+0 records out 00:09:06.138 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120832 s, 86.8 MB/s 00:09:06.138 17:24:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:06.138 17:24:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:06.138 256+0 records in 00:09:06.138 256+0 records out 00:09:06.138 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137509 s, 76.3 MB/s 00:09:06.138 17:24:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:06.138 17:24:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:06.138 17:24:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:06.138 17:24:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:06.138 17:24:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:06.138 17:24:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:06.138 17:24:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:06.138 17:24:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:06.138 17:24:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:06.138 17:24:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:06.138 17:24:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:06.138 17:24:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:06.138 17:24:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:06.138 17:24:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.138 17:24:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:06.138 17:24:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:06.138 17:24:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:06.138 17:24:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.138 17:24:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:06.138 17:24:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:06.138 17:24:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:06.138 17:24:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:06.138 17:24:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.138 17:24:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.138 17:24:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:06.138 17:24:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:06.138 17:24:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.138 17:24:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.138 17:24:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:06.399 17:24:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:06.399 17:24:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:06.399 17:24:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:06.399 17:24:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.399 17:24:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.399 17:24:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:06.399 17:24:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:06.399 17:24:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.399 17:24:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:06.399 17:24:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.399 17:24:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:06.659 17:24:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:06.659 17:24:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:06.659 17:24:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:06.659 17:24:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:06.659 17:24:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:06.659 17:24:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:06.659 17:24:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:06.659 17:24:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:06.659 17:24:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:06.659 17:24:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:06.659 17:24:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:06.659 17:24:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:06.659 17:24:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:06.920 17:24:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:06.920 [2024-10-08 17:24:58.807716] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:06.920 [2024-10-08 17:24:58.860056] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.920 [2024-10-08 17:24:58.860057] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.920 [2024-10-08 17:24:58.889734] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:06.920 [2024-10-08 17:24:58.889764] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:10.222 17:25:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:10.222 17:25:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:10.222 spdk_app_start Round 2 00:09:10.222 17:25:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 137470 /var/tmp/spdk-nbd.sock 00:09:10.222 17:25:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 137470 ']' 00:09:10.222 17:25:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:10.222 17:25:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:10.222 17:25:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:10.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
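
The dd/cmp traces in each round are nbd_dd_data_verify's write and verify passes: one 1 MiB random file is pushed through both NBD devices with O_DIRECT, then read back and byte-compared against the source. Folded into one script below; the helper actually runs write and verify as two separate invocations, and $SPDK_ROOT stands in for the workspace path:

    tmp=$SPDK_ROOT/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct # write pass, O_DIRECT
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                            # verify pass: compare the 1 MiB
    done
    rm "$tmp"
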
00:09:10.222 17:25:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:10.222 17:25:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:10.222 17:25:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:10.222 17:25:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:10.222 17:25:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:10.222 Malloc0 00:09:10.222 17:25:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:10.483 Malloc1 00:09:10.483 17:25:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:10.483 17:25:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.483 17:25:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:10.483 17:25:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:10.483 17:25:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:10.484 17:25:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:10.484 17:25:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:10.484 17:25:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.484 17:25:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:10.484 17:25:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:10.484 17:25:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:10.484 17:25:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:10.484 17:25:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:10.484 17:25:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:10.484 17:25:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:10.484 17:25:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:10.484 /dev/nbd0 00:09:10.484 17:25:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:10.484 17:25:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:10.484 17:25:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:10.484 17:25:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:10.484 17:25:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:10.484 17:25:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:10.484 17:25:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:10.484 17:25:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:10.484 17:25:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:10.484 17:25:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:10.484 17:25:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:09:10.484 1+0 records in 00:09:10.484 1+0 records out 00:09:10.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229081 s, 17.9 MB/s 00:09:10.484 17:25:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:10.484 17:25:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:10.484 17:25:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:10.484 17:25:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:10.484 17:25:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:10.484 17:25:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:10.484 17:25:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:10.484 17:25:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:10.745 /dev/nbd1 00:09:10.745 17:25:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:10.745 17:25:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:10.745 17:25:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:10.745 17:25:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:10.745 17:25:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:10.745 17:25:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:10.745 17:25:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:10.745 17:25:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:10.745 17:25:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:10.745 17:25:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:10.745 17:25:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:10.745 1+0 records in 00:09:10.745 1+0 records out 00:09:10.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279551 s, 14.7 MB/s 00:09:10.745 17:25:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:10.745 17:25:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:10.745 17:25:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:10.745 17:25:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:10.745 17:25:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:10.745 17:25:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:10.745 17:25:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:10.745 17:25:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:10.745 17:25:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.745 17:25:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:09:11.007 { 00:09:11.007 "nbd_device": "/dev/nbd0", 00:09:11.007 "bdev_name": "Malloc0" 00:09:11.007 }, 00:09:11.007 { 00:09:11.007 "nbd_device": "/dev/nbd1", 00:09:11.007 "bdev_name": "Malloc1" 00:09:11.007 } 00:09:11.007 ]' 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:11.007 { 00:09:11.007 "nbd_device": "/dev/nbd0", 00:09:11.007 "bdev_name": "Malloc0" 00:09:11.007 }, 00:09:11.007 { 00:09:11.007 "nbd_device": "/dev/nbd1", 00:09:11.007 "bdev_name": "Malloc1" 00:09:11.007 } 00:09:11.007 ]' 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:11.007 /dev/nbd1' 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:11.007 /dev/nbd1' 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:11.007 256+0 records in 00:09:11.007 256+0 records out 00:09:11.007 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118512 s, 88.5 MB/s 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:11.007 256+0 records in 00:09:11.007 256+0 records out 00:09:11.007 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122036 s, 85.9 MB/s 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:11.007 256+0 records in 00:09:11.007 256+0 records out 00:09:11.007 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138003 s, 76.0 MB/s 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:11.007 17:25:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:11.268 17:25:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:11.268 17:25:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.268 17:25:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:11.268 17:25:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:11.268 17:25:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:11.268 17:25:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.268 17:25:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:11.268 17:25:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:11.268 17:25:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:11.268 17:25:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:11.268 17:25:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.268 17:25:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.268 17:25:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:11.268 17:25:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:11.268 17:25:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.268 17:25:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.268 17:25:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:11.530 17:25:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:11.530 17:25:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:11.530 17:25:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:11.530 17:25:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.530 17:25:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.530 17:25:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:11.530 17:25:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:11.530 17:25:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.530 17:25:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:11.530 17:25:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.530 17:25:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:11.791 17:25:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:11.791 17:25:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:11.791 17:25:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:11.791 17:25:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:11.791 17:25:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:11.791 17:25:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:11.791 17:25:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:11.791 17:25:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:11.791 17:25:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:11.791 17:25:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:11.791 17:25:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:11.791 17:25:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:11.791 17:25:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:12.052 17:25:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:12.052 [2024-10-08 17:25:03.904800] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:12.052 [2024-10-08 17:25:03.957528] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.052 [2024-10-08 17:25:03.957529] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.052 [2024-10-08 17:25:03.986497] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:12.052 [2024-10-08 17:25:03.986526] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:15.355 17:25:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 137470 /var/tmp/spdk-nbd.sock 00:09:15.355 17:25:06 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 137470 ']' 00:09:15.355 17:25:06 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:15.355 17:25:06 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:15.355 17:25:06 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:15.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
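
The "Waiting for process to start up..." message comes from waitforlisten, which polls the app (here with max_retries=100) until its RPC socket responds. The trace never shows the loop body itself, so the sketch below is an illustrative reconstruction rather than autotest_common.sh's actual code; rpc_get_methods is simply a cheap RPC to probe the socket with:

    waitforlisten_sketch() {   # illustrative reconstruction, not the real helper
        local pid=$1 sock=$2 i=0 max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        while (( i++ < max_retries )); do
            kill -0 "$pid" 2>/dev/null || return 1                     # app died early
            "$rpc" -s "$sock" rpc_get_methods &>/dev/null && return 0  # socket answers
            sleep 0.1
        done
        return 1
    }
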
00:09:15.355 17:25:06 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:15.355 17:25:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:15.355 17:25:06 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:15.355 17:25:06 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:15.355 17:25:06 event.app_repeat -- event/event.sh@39 -- # killprocess 137470 00:09:15.355 17:25:06 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 137470 ']' 00:09:15.355 17:25:06 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 137470 00:09:15.355 17:25:07 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:09:15.355 17:25:07 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:15.355 17:25:07 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 137470 00:09:15.355 17:25:07 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:15.355 17:25:07 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:15.355 17:25:07 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 137470' 00:09:15.355 killing process with pid 137470 00:09:15.355 17:25:07 event.app_repeat -- common/autotest_common.sh@969 -- # kill 137470 00:09:15.355 17:25:07 event.app_repeat -- common/autotest_common.sh@974 -- # wait 137470 00:09:15.355 spdk_app_start is called in Round 0. 00:09:15.355 Shutdown signal received, stop current app iteration 00:09:15.355 Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 reinitialization... 00:09:15.355 spdk_app_start is called in Round 1. 00:09:15.355 Shutdown signal received, stop current app iteration 00:09:15.355 Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 reinitialization... 00:09:15.355 spdk_app_start is called in Round 2. 00:09:15.355 Shutdown signal received, stop current app iteration 00:09:15.355 Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 reinitialization... 00:09:15.355 spdk_app_start is called in Round 3. 
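
killprocess, traced here and after the scheduler test, refuses to fire blindly: the pid must be non-empty, still alive per kill -0, and on Linux its comm name (reactor_0 above) must not be sudo; only then does it SIGTERM and wait. Reassembled from those traced checks into a single function:

    killprocess() {   # reassembled from the traced checks; the upstream helper may differ
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0     # already gone, nothing to kill
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" = sudo ] && return 1         # never kill a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }
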
00:09:15.355 Shutdown signal received, stop current app iteration 00:09:15.355 17:25:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:15.355 17:25:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:15.355 00:09:15.355 real 0m16.425s 00:09:15.355 user 0m35.914s 00:09:15.355 sys 0m2.269s 00:09:15.355 17:25:07 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:15.355 17:25:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:15.355 ************************************ 00:09:15.355 END TEST app_repeat 00:09:15.355 ************************************ 00:09:15.355 17:25:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:15.355 17:25:07 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:15.356 17:25:07 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:15.356 17:25:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.356 17:25:07 event -- common/autotest_common.sh@10 -- # set +x 00:09:15.356 ************************************ 00:09:15.356 START TEST cpu_locks 00:09:15.356 ************************************ 00:09:15.356 17:25:07 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:15.356 * Looking for test storage... 00:09:15.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:15.356 17:25:07 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:15.356 17:25:07 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:09:15.356 17:25:07 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:15.617 17:25:07 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.617 17:25:07 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:15.617 17:25:07 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.617 17:25:07 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:15.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.617 --rc genhtml_branch_coverage=1 00:09:15.617 --rc genhtml_function_coverage=1 00:09:15.617 --rc genhtml_legend=1 00:09:15.617 --rc geninfo_all_blocks=1 00:09:15.617 --rc geninfo_unexecuted_blocks=1 00:09:15.617 00:09:15.617 ' 00:09:15.617 17:25:07 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:15.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.617 --rc genhtml_branch_coverage=1 00:09:15.617 --rc genhtml_function_coverage=1 00:09:15.617 --rc genhtml_legend=1 00:09:15.617 --rc geninfo_all_blocks=1 00:09:15.617 --rc geninfo_unexecuted_blocks=1 00:09:15.617 00:09:15.618 ' 00:09:15.618 17:25:07 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:15.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.618 --rc genhtml_branch_coverage=1 00:09:15.618 --rc genhtml_function_coverage=1 00:09:15.618 --rc genhtml_legend=1 00:09:15.618 --rc geninfo_all_blocks=1 00:09:15.618 --rc geninfo_unexecuted_blocks=1 00:09:15.618 00:09:15.618 ' 00:09:15.618 17:25:07 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:15.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.618 --rc genhtml_branch_coverage=1 00:09:15.618 --rc genhtml_function_coverage=1 00:09:15.618 --rc genhtml_legend=1 00:09:15.618 --rc geninfo_all_blocks=1 00:09:15.618 --rc geninfo_unexecuted_blocks=1 00:09:15.618 00:09:15.618 ' 00:09:15.618 17:25:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:15.618 17:25:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:15.618 17:25:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:15.618 17:25:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:15.618 17:25:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:15.618 17:25:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.618 17:25:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:15.618 ************************************ 
00:09:15.618 START TEST default_locks 00:09:15.618 ************************************ 00:09:15.618 17:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:09:15.618 17:25:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=141037 00:09:15.618 17:25:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 141037 00:09:15.618 17:25:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:15.618 17:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 141037 ']' 00:09:15.618 17:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.618 17:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:15.618 17:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.618 17:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:15.618 17:25:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:15.618 [2024-10-08 17:25:07.532404] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:09:15.618 [2024-10-08 17:25:07.532464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141037 ] 00:09:15.880 [2024-10-08 17:25:07.614405] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.880 [2024-10-08 17:25:07.677117] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.454 17:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.454 17:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:09:16.454 17:25:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 141037 00:09:16.454 17:25:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 141037 00:09:16.454 17:25:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:17.027 lslocks: write error 00:09:17.027 17:25:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 141037 00:09:17.027 17:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 141037 ']' 00:09:17.027 17:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 141037 00:09:17.027 17:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:09:17.027 17:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:17.027 17:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 141037 00:09:17.027 17:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:17.027 17:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:17.027 17:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 141037' 
00:09:17.027 killing process with pid 141037 00:09:17.027 17:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 141037 00:09:17.027 17:25:08 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 141037 00:09:17.288 17:25:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 141037 00:09:17.288 17:25:09 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:09:17.288 17:25:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 141037 00:09:17.288 17:25:09 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:17.288 17:25:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:17.288 17:25:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:17.288 17:25:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:17.288 17:25:09 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 141037 00:09:17.288 17:25:09 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 141037 ']' 00:09:17.288 17:25:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.288 17:25:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:17.288 17:25:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:17.288 17:25:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:17.289 17:25:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:17.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (141037) - No such process 00:09:17.289 ERROR: process (pid: 141037) is no longer running 00:09:17.289 17:25:09 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:17.289 17:25:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:09:17.289 17:25:09 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:09:17.289 17:25:09 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:17.289 17:25:09 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:17.289 17:25:09 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:17.289 17:25:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:17.289 17:25:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:17.289 17:25:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:17.289 17:25:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:17.289 00:09:17.289 real 0m1.635s 00:09:17.289 user 0m1.731s 00:09:17.289 sys 0m0.593s 00:09:17.289 17:25:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:17.289 17:25:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:17.289 ************************************ 00:09:17.289 END TEST default_locks 00:09:17.289 ************************************ 00:09:17.289 17:25:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:17.289 17:25:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:17.289 17:25:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:17.289 17:25:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:17.289 ************************************ 00:09:17.289 START TEST default_locks_via_rpc 00:09:17.289 ************************************ 00:09:17.289 17:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:09:17.289 17:25:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=141407 00:09:17.289 17:25:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 141407 00:09:17.289 17:25:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:17.289 17:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 141407 ']' 00:09:17.289 17:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.289 17:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:17.289 17:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:17.289 17:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:17.289 17:25:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.289 [2024-10-08 17:25:09.251593] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:09:17.289 [2024-10-08 17:25:09.251657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141407 ] 00:09:17.551 [2024-10-08 17:25:09.331161] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.551 [2024-10-08 17:25:09.393685] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.122 17:25:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:18.123 17:25:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:18.123 17:25:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:18.123 17:25:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.123 17:25:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.123 17:25:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.123 17:25:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:18.123 17:25:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:18.123 17:25:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:18.123 17:25:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:18.123 17:25:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:18.123 17:25:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.123 17:25:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.123 17:25:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.123 17:25:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 141407 00:09:18.123 17:25:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 141407 00:09:18.123 17:25:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:18.383 17:25:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 141407 00:09:18.383 17:25:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 141407 ']' 00:09:18.383 17:25:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 141407 00:09:18.383 17:25:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:09:18.383 17:25:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:18.383 17:25:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 141407 00:09:18.383 17:25:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:18.383 17:25:10 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:18.383 17:25:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 141407' 00:09:18.383 killing process with pid 141407 00:09:18.383 17:25:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 141407 00:09:18.383 17:25:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 141407 00:09:18.644 00:09:18.644 real 0m1.360s 00:09:18.644 user 0m1.464s 00:09:18.644 sys 0m0.458s 00:09:18.644 17:25:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:18.644 17:25:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.644 ************************************ 00:09:18.645 END TEST default_locks_via_rpc 00:09:18.645 ************************************ 00:09:18.645 17:25:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:18.645 17:25:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:18.645 17:25:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:18.645 17:25:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:18.645 ************************************ 00:09:18.645 START TEST non_locking_app_on_locked_coremask 00:09:18.645 ************************************ 00:09:18.645 17:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:09:18.645 17:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=141763 00:09:18.645 17:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 141763 /var/tmp/spdk.sock 00:09:18.645 17:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:18.645 17:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 141763 ']' 00:09:18.645 17:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.645 17:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:18.645 17:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.645 17:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:18.645 17:25:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:18.906 [2024-10-08 17:25:10.676857] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:09:18.906 [2024-10-08 17:25:10.676914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141763 ] 00:09:18.906 [2024-10-08 17:25:10.754777] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.906 [2024-10-08 17:25:10.822560] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.478 17:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:19.478 17:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:19.478 17:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=142066 00:09:19.478 17:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 142066 /var/tmp/spdk2.sock 00:09:19.478 17:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 142066 ']' 00:09:19.478 17:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:19.478 17:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:19.478 17:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:19.478 17:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:19.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:19.478 17:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:19.478 17:25:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:19.739 [2024-10-08 17:25:11.529742] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:09:19.739 [2024-10-08 17:25:11.529796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142066 ] 00:09:19.739 [2024-10-08 17:25:11.600875] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:19.739 [2024-10-08 17:25:11.600895] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.739 [2024-10-08 17:25:11.711534] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.311 17:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:20.311 17:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:20.311 17:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 141763 00:09:20.311 17:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 141763 00:09:20.572 17:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:20.833 lslocks: write error 00:09:20.833 17:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 141763 00:09:20.833 17:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 141763 ']' 00:09:20.833 17:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 141763 00:09:20.833 17:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:20.833 17:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:20.833 17:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 141763 00:09:21.095 17:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:21.095 17:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:21.095 17:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 141763' 00:09:21.095 killing process with pid 141763 00:09:21.095 17:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 141763 00:09:21.095 17:25:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 141763 00:09:21.356 17:25:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 142066 00:09:21.356 17:25:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 142066 ']' 00:09:21.357 17:25:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 142066 00:09:21.357 17:25:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:21.357 17:25:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:21.357 17:25:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 142066 00:09:21.357 17:25:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:21.357 17:25:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:21.357 17:25:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 142066' 00:09:21.357 killing 
process with pid 142066 00:09:21.357 17:25:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 142066 00:09:21.357 17:25:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 142066 00:09:21.618 00:09:21.618 real 0m2.926s 00:09:21.618 user 0m3.251s 00:09:21.618 sys 0m0.921s 00:09:21.618 17:25:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:21.618 17:25:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:21.618 ************************************ 00:09:21.618 END TEST non_locking_app_on_locked_coremask 00:09:21.618 ************************************ 00:09:21.618 17:25:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:21.618 17:25:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:21.618 17:25:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.618 17:25:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:21.879 ************************************ 00:09:21.879 START TEST locking_app_on_unlocked_coremask 00:09:21.879 ************************************ 00:09:21.879 17:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:09:21.879 17:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=142469 00:09:21.879 17:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 142469 /var/tmp/spdk.sock 00:09:21.879 17:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:21.879 17:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 142469 ']' 00:09:21.879 17:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.879 17:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:21.879 17:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.879 17:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:21.879 17:25:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:21.879 [2024-10-08 17:25:13.677135] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:09:21.879 [2024-10-08 17:25:13.677187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142469 ] 00:09:21.879 [2024-10-08 17:25:13.754258] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:21.879 [2024-10-08 17:25:13.754278] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.879 [2024-10-08 17:25:13.808336] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.824 17:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.824 17:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:22.824 17:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:22.824 17:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=142587 00:09:22.824 17:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 142587 /var/tmp/spdk2.sock 00:09:22.824 17:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 142587 ']' 00:09:22.824 17:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:22.824 17:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:22.824 17:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:22.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:22.824 17:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:22.824 17:25:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:22.824 [2024-10-08 17:25:14.493986] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:09:22.824 [2024-10-08 17:25:14.494040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142587 ] 00:09:22.824 [2024-10-08 17:25:14.571111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.824 [2024-10-08 17:25:14.681886] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.395 17:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.395 17:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:23.395 17:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 142587 00:09:23.395 17:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 142587 00:09:23.395 17:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:23.968 lslocks: write error 00:09:23.968 17:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 142469 00:09:23.968 17:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 142469 ']' 00:09:23.968 17:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 142469 00:09:23.968 17:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:23.968 17:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.968 17:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 142469 00:09:23.968 17:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:23.968 17:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:23.968 17:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 142469' 00:09:23.969 killing process with pid 142469 00:09:23.969 17:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 142469 00:09:23.969 17:25:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 142469 00:09:24.541 17:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 142587 00:09:24.541 17:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 142587 ']' 00:09:24.541 17:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 142587 00:09:24.541 17:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:24.541 17:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:24.541 17:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 142587 00:09:24.541 17:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:24.541 17:25:16 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:24.541 17:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 142587' 00:09:24.541 killing process with pid 142587 00:09:24.541 17:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 142587 00:09:24.541 17:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 142587 00:09:24.803 00:09:24.803 real 0m2.951s 00:09:24.803 user 0m3.247s 00:09:24.803 sys 0m0.933s 00:09:24.803 17:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.803 17:25:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:24.803 ************************************ 00:09:24.803 END TEST locking_app_on_unlocked_coremask 00:09:24.803 ************************************ 00:09:24.803 17:25:16 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:24.803 17:25:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:24.803 17:25:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.803 17:25:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:24.803 ************************************ 00:09:24.803 START TEST locking_app_on_locked_coremask 00:09:24.803 ************************************ 00:09:24.803 17:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:09:24.803 17:25:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=143174 00:09:24.803 17:25:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 143174 /var/tmp/spdk.sock 00:09:24.803 17:25:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:24.803 17:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 143174 ']' 00:09:24.803 17:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.803 17:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.803 17:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.803 17:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.803 17:25:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:24.803 [2024-10-08 17:25:16.700650] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:09:24.803 [2024-10-08 17:25:16.700703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143174 ] 00:09:24.803 [2024-10-08 17:25:16.777855] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.065 [2024-10-08 17:25:16.832916] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.637 17:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:25.637 17:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:25.637 17:25:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=143192 00:09:25.637 17:25:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 143192 /var/tmp/spdk2.sock 00:09:25.637 17:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:09:25.637 17:25:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:25.637 17:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 143192 /var/tmp/spdk2.sock 00:09:25.637 17:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:25.637 17:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.637 17:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:25.637 17:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.637 17:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 143192 /var/tmp/spdk2.sock 00:09:25.637 17:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 143192 ']' 00:09:25.637 17:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:25.637 17:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:25.637 17:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:25.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:25.637 17:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:25.637 17:25:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:25.637 [2024-10-08 17:25:17.555280] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:09:25.637 [2024-10-08 17:25:17.555330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143192 ] 00:09:25.637 [2024-10-08 17:25:17.629211] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 143174 has claimed it. 00:09:25.637 [2024-10-08 17:25:17.629245] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:26.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (143192) - No such process 00:09:26.209 ERROR: process (pid: 143192) is no longer running 00:09:26.209 17:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:26.209 17:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:09:26.209 17:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:09:26.209 17:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:26.209 17:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:26.209 17:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:26.209 17:25:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 143174 00:09:26.209 17:25:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 143174 00:09:26.209 17:25:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:26.780 lslocks: write error 00:09:26.780 17:25:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 143174 00:09:26.780 17:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 143174 ']' 00:09:26.780 17:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 143174 00:09:26.780 17:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:26.780 17:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:26.780 17:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 143174 00:09:27.041 17:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:27.041 17:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:27.041 17:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 143174' 00:09:27.041 killing process with pid 143174 00:09:27.041 17:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 143174 00:09:27.041 17:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 143174 00:09:27.041 00:09:27.041 real 0m2.349s 00:09:27.041 user 0m2.619s 00:09:27.041 sys 0m0.714s 00:09:27.041 17:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.041 
17:25:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:27.041 ************************************ 00:09:27.041 END TEST locking_app_on_locked_coremask 00:09:27.041 ************************************ 00:09:27.041 17:25:19 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:27.042 17:25:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:27.042 17:25:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.042 17:25:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:27.303 ************************************ 00:09:27.303 START TEST locking_overlapped_coremask 00:09:27.303 ************************************ 00:09:27.303 17:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:09:27.303 17:25:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=143559 00:09:27.303 17:25:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 143559 /var/tmp/spdk.sock 00:09:27.303 17:25:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:09:27.303 17:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 143559 ']' 00:09:27.303 17:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.303 17:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:27.303 17:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.303 17:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:27.303 17:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:27.303 [2024-10-08 17:25:19.124326] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:09:27.303 [2024-10-08 17:25:19.124385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143559 ] 00:09:27.303 [2024-10-08 17:25:19.204423] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:27.303 [2024-10-08 17:25:19.259813] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.303 [2024-10-08 17:25:19.259965] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.303 [2024-10-08 17:25:19.259968] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.245 17:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.245 17:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:28.245 17:25:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=143873 00:09:28.245 17:25:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 143873 /var/tmp/spdk2.sock 00:09:28.245 17:25:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:28.245 17:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:09:28.245 17:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 143873 /var/tmp/spdk2.sock 00:09:28.245 17:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:28.245 17:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:28.245 17:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:28.245 17:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:28.245 17:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 143873 /var/tmp/spdk2.sock 00:09:28.245 17:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 143873 ']' 00:09:28.245 17:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:28.245 17:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:28.245 17:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:28.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:28.245 17:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:28.245 17:25:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:28.245 [2024-10-08 17:25:19.960575] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:09:28.245 [2024-10-08 17:25:19.960629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143873 ] 00:09:28.245 [2024-10-08 17:25:20.056389] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 143559 has claimed it. 00:09:28.245 [2024-10-08 17:25:20.056430] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:28.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (143873) - No such process 00:09:28.818 ERROR: process (pid: 143873) is no longer running 00:09:28.819 17:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.819 17:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:09:28.819 17:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:09:28.819 17:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:28.819 17:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:28.819 17:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:28.819 17:25:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:28.819 17:25:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:28.819 17:25:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:28.819 17:25:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:28.819 17:25:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 143559 00:09:28.819 17:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 143559 ']' 00:09:28.819 17:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 143559 00:09:28.819 17:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:09:28.819 17:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:28.819 17:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 143559 00:09:28.819 17:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:28.819 17:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:28.819 17:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 143559' 00:09:28.819 killing process with pid 143559 00:09:28.819 17:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 143559 00:09:28.819 17:25:20 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 143559 00:09:29.080 00:09:29.080 real 0m1.780s 00:09:29.080 user 0m5.056s 00:09:29.080 sys 0m0.393s 00:09:29.080 17:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.080 17:25:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:29.080 ************************************ 00:09:29.080 END TEST locking_overlapped_coremask 00:09:29.080 ************************************ 00:09:29.080 17:25:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:29.080 17:25:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:29.080 17:25:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.080 17:25:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:29.080 ************************************ 00:09:29.080 START TEST locking_overlapped_coremask_via_rpc 00:09:29.080 ************************************ 00:09:29.080 17:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:09:29.080 17:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=143935 00:09:29.080 17:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 143935 /var/tmp/spdk.sock 00:09:29.080 17:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:29.080 17:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 143935 ']' 00:09:29.080 17:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.080 17:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:29.080 17:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.080 17:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.080 17:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.080 [2024-10-08 17:25:20.990343] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:09:29.080 [2024-10-08 17:25:20.990399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143935 ] 00:09:29.080 [2024-10-08 17:25:21.069125] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:29.080 [2024-10-08 17:25:21.069151] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:29.342 [2024-10-08 17:25:21.131022] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.342 [2024-10-08 17:25:21.131334] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.342 [2024-10-08 17:25:21.131335] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.914 17:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:29.914 17:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:29.914 17:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=144263 00:09:29.914 17:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 144263 /var/tmp/spdk2.sock 00:09:29.914 17:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 144263 ']' 00:09:29.914 17:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:29.914 17:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:29.914 17:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:29.914 17:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:29.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:29.914 17:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.914 17:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.914 [2024-10-08 17:25:21.833120] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:09:29.914 [2024-10-08 17:25:21.833172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144263 ] 00:09:30.176 [2024-10-08 17:25:21.926737] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:30.176 [2024-10-08 17:25:21.926763] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:30.176 [2024-10-08 17:25:22.055736] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.176 [2024-10-08 17:25:22.059098] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.176 [2024-10-08 17:25:22.059099] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:09:30.748 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:30.748 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:30.748 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:30.748 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.748 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.748 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.748 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:30.748 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:09:30.748 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:30.748 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:30.748 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:30.748 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:30.748 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:30.748 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:30.748 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.748 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.748 [2024-10-08 17:25:22.632054] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 143935 has claimed it. 
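The claim fails because the two targets' core masks overlap: the first spdk_tgt holds -m 0x7 (cores 0-2, claimed once the first framework_enable_cpumask_locks call above succeeded) and this second one asked for -m 0x1c (cores 2-4), so core 2 is contested, exactly as the error reports. The overlap is easy to verify in the shell:
# 0x7 & 0x1c = 0x4, i.e. bit 2: CPU core 2 is requested by both targets.
printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))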
00:09:30.748 request: 00:09:30.748 { 00:09:30.748 "method": "framework_enable_cpumask_locks", 00:09:30.748 "req_id": 1 00:09:30.748 } 00:09:30.748 Got JSON-RPC error response 00:09:30.748 response: 00:09:30.748 { 00:09:30.748 "code": -32603, 00:09:30.748 "message": "Failed to claim CPU core: 2" 00:09:30.748 } 00:09:30.749 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:30.749 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:09:30.749 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:30.749 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:30.749 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:30.749 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 143935 /var/tmp/spdk.sock 00:09:30.749 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 143935 ']' 00:09:30.749 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.749 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:30.749 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.749 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:30.749 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.010 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.010 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:31.010 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 144263 /var/tmp/spdk2.sock 00:09:31.010 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 144263 ']' 00:09:31.010 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:31.010 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:31.010 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:31.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
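The JSON above is the error path of the same RPC the first target just accepted: with pid 143935 holding cores 0-2, asking the second target to claim its own mask collides on core 2. Run by hand, the failing call corresponds to the rpc.py invocation that the test's rpc_cmd helper issues (paths as used throughout this log):
# Returns code -32603, "Failed to claim CPU core: 2", for as long as
# pid 143935 still holds the lock file for core 2.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks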
00:09:31.010 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:31.010 17:25:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.272 17:25:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.272 17:25:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:31.272 17:25:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:31.272 17:25:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:31.272 17:25:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:31.272 17:25:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:31.272 00:09:31.272 real 0m2.088s 00:09:31.272 user 0m0.869s 00:09:31.272 sys 0m0.142s 00:09:31.272 17:25:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.272 17:25:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.272 ************************************ 00:09:31.272 END TEST locking_overlapped_coremask_via_rpc 00:09:31.272 ************************************ 00:09:31.272 17:25:23 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:31.272 17:25:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 143935 ]] 00:09:31.272 17:25:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 143935 00:09:31.272 17:25:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 143935 ']' 00:09:31.272 17:25:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 143935 00:09:31.272 17:25:23 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:09:31.272 17:25:23 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:31.272 17:25:23 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 143935 00:09:31.272 17:25:23 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:31.272 17:25:23 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:31.272 17:25:23 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 143935' 00:09:31.272 killing process with pid 143935 00:09:31.272 17:25:23 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 143935 00:09:31.272 17:25:23 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 143935 00:09:31.533 17:25:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 144263 ]] 00:09:31.533 17:25:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 144263 00:09:31.533 17:25:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 144263 ']' 00:09:31.533 17:25:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 144263 00:09:31.533 17:25:23 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:09:31.533 17:25:23 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:09:31.533 17:25:23 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 144263 00:09:31.533 17:25:23 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:09:31.533 17:25:23 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:09:31.533 17:25:23 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 144263' 00:09:31.533 killing process with pid 144263 00:09:31.533 17:25:23 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 144263 00:09:31.533 17:25:23 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 144263 00:09:31.795 17:25:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:31.795 17:25:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:31.795 17:25:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 143935 ]] 00:09:31.795 17:25:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 143935 00:09:31.795 17:25:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 143935 ']' 00:09:31.795 17:25:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 143935 00:09:31.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (143935) - No such process 00:09:31.795 17:25:23 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 143935 is not found' 00:09:31.795 Process with pid 143935 is not found 00:09:31.795 17:25:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 144263 ]] 00:09:31.795 17:25:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 144263 00:09:31.795 17:25:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 144263 ']' 00:09:31.795 17:25:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 144263 00:09:31.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (144263) - No such process 00:09:31.795 17:25:23 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 144263 is not found' 00:09:31.795 Process with pid 144263 is not found 00:09:31.795 17:25:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:31.795 00:09:31.795 real 0m16.365s 00:09:31.795 user 0m28.197s 00:09:31.795 sys 0m5.102s 00:09:31.795 17:25:23 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.795 17:25:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:31.795 ************************************ 00:09:31.795 END TEST cpu_locks 00:09:31.795 ************************************ 00:09:31.795 00:09:31.795 real 0m43.185s 00:09:31.795 user 1m25.130s 00:09:31.795 sys 0m8.462s 00:09:31.795 17:25:23 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.795 17:25:23 event -- common/autotest_common.sh@10 -- # set +x 00:09:31.795 ************************************ 00:09:31.795 END TEST event 00:09:31.795 ************************************ 00:09:31.795 17:25:23 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:31.795 17:25:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:31.795 17:25:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:31.795 17:25:23 -- common/autotest_common.sh@10 -- # set +x 00:09:31.795 ************************************ 00:09:31.795 START TEST thread 00:09:31.795 ************************************ 00:09:31.795 17:25:23 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:32.058 * Looking for test storage... 00:09:32.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:09:32.058 17:25:23 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:32.058 17:25:23 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:09:32.058 17:25:23 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:32.058 17:25:23 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:32.058 17:25:23 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:32.058 17:25:23 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:32.058 17:25:23 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:32.058 17:25:23 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.058 17:25:23 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:32.058 17:25:23 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:32.058 17:25:23 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:32.058 17:25:23 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:32.058 17:25:23 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:32.058 17:25:23 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:32.058 17:25:23 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:32.058 17:25:23 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:32.058 17:25:23 thread -- scripts/common.sh@345 -- # : 1 00:09:32.058 17:25:23 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:32.058 17:25:23 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:32.058 17:25:23 thread -- scripts/common.sh@365 -- # decimal 1 00:09:32.058 17:25:23 thread -- scripts/common.sh@353 -- # local d=1 00:09:32.058 17:25:23 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.058 17:25:23 thread -- scripts/common.sh@355 -- # echo 1 00:09:32.058 17:25:23 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:32.058 17:25:23 thread -- scripts/common.sh@366 -- # decimal 2 00:09:32.058 17:25:23 thread -- scripts/common.sh@353 -- # local d=2 00:09:32.058 17:25:23 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.058 17:25:23 thread -- scripts/common.sh@355 -- # echo 2 00:09:32.058 17:25:23 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:32.058 17:25:23 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:32.058 17:25:23 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:32.058 17:25:23 thread -- scripts/common.sh@368 -- # return 0 00:09:32.058 17:25:23 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.058 17:25:23 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:32.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.058 --rc genhtml_branch_coverage=1 00:09:32.058 --rc genhtml_function_coverage=1 00:09:32.058 --rc genhtml_legend=1 00:09:32.058 --rc geninfo_all_blocks=1 00:09:32.058 --rc geninfo_unexecuted_blocks=1 00:09:32.058 00:09:32.058 ' 00:09:32.058 17:25:23 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:32.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.058 --rc genhtml_branch_coverage=1 00:09:32.058 --rc genhtml_function_coverage=1 00:09:32.058 --rc genhtml_legend=1 00:09:32.058 --rc geninfo_all_blocks=1 00:09:32.058 --rc geninfo_unexecuted_blocks=1 00:09:32.058 00:09:32.058 ' 00:09:32.058 17:25:23 thread 
-- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:32.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.058 --rc genhtml_branch_coverage=1 00:09:32.058 --rc genhtml_function_coverage=1 00:09:32.058 --rc genhtml_legend=1 00:09:32.058 --rc geninfo_all_blocks=1 00:09:32.058 --rc geninfo_unexecuted_blocks=1 00:09:32.058 00:09:32.058 ' 00:09:32.058 17:25:23 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:32.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.058 --rc genhtml_branch_coverage=1 00:09:32.058 --rc genhtml_function_coverage=1 00:09:32.058 --rc genhtml_legend=1 00:09:32.058 --rc geninfo_all_blocks=1 00:09:32.058 --rc geninfo_unexecuted_blocks=1 00:09:32.058 00:09:32.058 ' 00:09:32.058 17:25:23 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:32.058 17:25:23 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:09:32.058 17:25:23 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:32.058 17:25:23 thread -- common/autotest_common.sh@10 -- # set +x 00:09:32.058 ************************************ 00:09:32.058 START TEST thread_poller_perf 00:09:32.058 ************************************ 00:09:32.058 17:25:23 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:32.058 [2024-10-08 17:25:23.980956] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:09:32.058 [2024-10-08 17:25:23.981069] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144718 ] 00:09:32.319 [2024-10-08 17:25:24.060068] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.319 [2024-10-08 17:25:24.116920] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.319 Running 1000 pollers for 1 seconds with 1 microseconds period. 
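Judging by the banner it prints, poller_perf here registers 1000 pollers (-b 1000) on one reactor and runs them for 1 second (-t 1) with a 1 microsecond period (-l 1; the second pass below uses -l 0 for period-less pollers). The figure it ultimately reports is the average cost of one poller invocation, derived from the TSC:
poller_cost (cyc)  = busy (cyc) / total_run_count
poller_cost (nsec) = poller_cost (cyc) * 1e9 / tsc_hz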
00:09:33.262 [2024-10-08T15:25:25.254Z] ====================================== 00:09:33.262 [2024-10-08T15:25:25.254Z] busy:2405569100 (cyc) 00:09:33.262 [2024-10-08T15:25:25.254Z] total_run_count: 419000 00:09:33.262 [2024-10-08T15:25:25.254Z] tsc_hz: 2400000000 (cyc) 00:09:33.262 [2024-10-08T15:25:25.254Z] ====================================== 00:09:33.262 [2024-10-08T15:25:25.254Z] poller_cost: 5741 (cyc), 2392 (nsec) 00:09:33.262 00:09:33.262 real 0m1.207s 00:09:33.262 user 0m1.113s 00:09:33.262 sys 0m0.090s 00:09:33.263 17:25:25 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.263 17:25:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:33.263 ************************************ 00:09:33.263 END TEST thread_poller_perf 00:09:33.263 ************************************ 00:09:33.263 17:25:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:33.263 17:25:25 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:09:33.263 17:25:25 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.263 17:25:25 thread -- common/autotest_common.sh@10 -- # set +x 00:09:33.263 ************************************ 00:09:33.263 START TEST thread_poller_perf 00:09:33.263 ************************************ 00:09:33.263 17:25:25 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:33.523 [2024-10-08 17:25:25.262587] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:09:33.523 [2024-10-08 17:25:25.262668] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145068 ] 00:09:33.523 [2024-10-08 17:25:25.345220] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.523 [2024-10-08 17:25:25.409340] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.523 Running 1000 pollers for 1 seconds with 0 microseconds period. 
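Plugging the 1 microsecond run (the first result block above) into that formula confirms the printed numbers; a quick shell check, using the values exactly as reported:
# 2405569100 busy cycles over 419000 runs, TSC at 2400000000 Hz:
echo $(( 2405569100 / 419000 ))   # -> 5741  cycles per poller invocation
echo $(( 5741 * 1000 / 2400 ))    # -> 2392  nanoseconds at 2.4 GHz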
00:09:34.466 [2024-10-08T15:25:26.458Z] ====================================== 00:09:34.466 [2024-10-08T15:25:26.458Z] busy:2401664308 (cyc) 00:09:34.466 [2024-10-08T15:25:26.458Z] total_run_count: 5346000 00:09:34.466 [2024-10-08T15:25:26.458Z] tsc_hz: 2400000000 (cyc) 00:09:34.466 [2024-10-08T15:25:26.458Z] ====================================== 00:09:34.466 [2024-10-08T15:25:26.458Z] poller_cost: 449 (cyc), 187 (nsec) 00:09:34.466 00:09:34.466 real 0m1.213s 00:09:34.466 user 0m1.125s 00:09:34.466 sys 0m0.085s 00:09:34.466 17:25:26 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.466 17:25:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:34.466 ************************************ 00:09:34.466 END TEST thread_poller_perf 00:09:34.466 ************************************ 00:09:34.728 17:25:26 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:34.728 00:09:34.728 real 0m2.776s 00:09:34.728 user 0m2.411s 00:09:34.728 sys 0m0.376s 00:09:34.728 17:25:26 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.728 17:25:26 thread -- common/autotest_common.sh@10 -- # set +x 00:09:34.728 ************************************ 00:09:34.728 END TEST thread 00:09:34.728 ************************************ 00:09:34.728 17:25:26 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:34.728 17:25:26 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:34.728 17:25:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:34.728 17:25:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.728 17:25:26 -- common/autotest_common.sh@10 -- # set +x 00:09:34.728 ************************************ 00:09:34.728 START TEST app_cmdline 00:09:34.728 ************************************ 00:09:34.728 17:25:26 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:34.728 * Looking for test storage... 
00:09:34.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:34.728 17:25:26 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:34.728 17:25:26 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:09:34.728 17:25:26 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:34.990 17:25:26 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.990 17:25:26 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.991 17:25:26 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.991 17:25:26 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:34.991 17:25:26 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.991 17:25:26 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:34.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.991 --rc genhtml_branch_coverage=1 00:09:34.991 --rc genhtml_function_coverage=1 00:09:34.991 --rc genhtml_legend=1 00:09:34.991 --rc geninfo_all_blocks=1 00:09:34.991 --rc geninfo_unexecuted_blocks=1 00:09:34.991 00:09:34.991 ' 00:09:34.991 17:25:26 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:34.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.991 --rc genhtml_branch_coverage=1 00:09:34.991 --rc genhtml_function_coverage=1 00:09:34.991 --rc genhtml_legend=1 00:09:34.991 --rc geninfo_all_blocks=1 00:09:34.991 --rc geninfo_unexecuted_blocks=1 
00:09:34.991 00:09:34.991 ' 00:09:34.991 17:25:26 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:34.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.991 --rc genhtml_branch_coverage=1 00:09:34.991 --rc genhtml_function_coverage=1 00:09:34.991 --rc genhtml_legend=1 00:09:34.991 --rc geninfo_all_blocks=1 00:09:34.991 --rc geninfo_unexecuted_blocks=1 00:09:34.991 00:09:34.991 ' 00:09:34.991 17:25:26 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:34.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.991 --rc genhtml_branch_coverage=1 00:09:34.991 --rc genhtml_function_coverage=1 00:09:34.991 --rc genhtml_legend=1 00:09:34.991 --rc geninfo_all_blocks=1 00:09:34.991 --rc geninfo_unexecuted_blocks=1 00:09:34.991 00:09:34.991 ' 00:09:34.991 17:25:26 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:34.991 17:25:26 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=145464 00:09:34.991 17:25:26 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 145464 00:09:34.991 17:25:26 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 145464 ']' 00:09:34.991 17:25:26 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.991 17:25:26 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:34.991 17:25:26 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.991 17:25:26 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.991 17:25:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:34.991 17:25:26 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:34.991 [2024-10-08 17:25:26.840600] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:09:34.991 [2024-10-08 17:25:26.840672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145464 ] 00:09:34.991 [2024-10-08 17:25:26.919112] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.991 [2024-10-08 17:25:26.974418] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.934 17:25:27 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.934 17:25:27 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:09:35.934 17:25:27 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:35.934 { 00:09:35.934 "version": "SPDK v25.01-pre git sha1 52e9db722", 00:09:35.934 "fields": { 00:09:35.934 "major": 25, 00:09:35.934 "minor": 1, 00:09:35.934 "patch": 0, 00:09:35.934 "suffix": "-pre", 00:09:35.934 "commit": "52e9db722" 00:09:35.934 } 00:09:35.934 } 00:09:35.934 17:25:27 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:35.934 17:25:27 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:35.934 17:25:27 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:35.934 17:25:27 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:35.934 17:25:27 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:35.934 17:25:27 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:35.934 17:25:27 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.934 17:25:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:35.934 17:25:27 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:35.934 17:25:27 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.934 17:25:27 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:35.934 17:25:27 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:35.934 17:25:27 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:35.934 17:25:27 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:09:35.934 17:25:27 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:35.934 17:25:27 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:35.934 17:25:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:35.934 17:25:27 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:35.934 17:25:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:35.934 17:25:27 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:35.934 17:25:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:35.934 17:25:27 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:35.934 17:25:27 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:35.934 17:25:27 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:36.195 request: 00:09:36.195 { 00:09:36.195 "method": "env_dpdk_get_mem_stats", 00:09:36.195 "req_id": 1 00:09:36.195 } 00:09:36.195 Got JSON-RPC error response 00:09:36.195 response: 00:09:36.195 { 00:09:36.195 "code": -32601, 00:09:36.195 "message": "Method not found" 00:09:36.195 } 00:09:36.195 17:25:28 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:09:36.195 17:25:28 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:36.195 17:25:28 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:36.195 17:25:28 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:36.195 17:25:28 app_cmdline -- app/cmdline.sh@1 -- # killprocess 145464 00:09:36.195 17:25:28 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 145464 ']' 00:09:36.195 17:25:28 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 145464 00:09:36.195 17:25:28 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:09:36.195 17:25:28 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:36.195 17:25:28 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 145464 00:09:36.195 17:25:28 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:36.195 17:25:28 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:36.195 17:25:28 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 145464' 00:09:36.195 killing process with pid 145464 00:09:36.195 17:25:28 app_cmdline -- common/autotest_common.sh@969 -- # kill 145464 00:09:36.195 17:25:28 app_cmdline -- common/autotest_common.sh@974 -- # wait 145464 00:09:36.459 00:09:36.459 real 0m1.735s 00:09:36.459 user 0m2.069s 00:09:36.459 sys 0m0.478s 00:09:36.459 17:25:28 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.459 17:25:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:36.459 ************************************ 00:09:36.459 END TEST app_cmdline 00:09:36.459 ************************************ 00:09:36.459 17:25:28 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:36.459 17:25:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:36.459 17:25:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:36.459 17:25:28 -- common/autotest_common.sh@10 -- # set +x 00:09:36.459 ************************************ 00:09:36.459 START TEST version 00:09:36.459 ************************************ 00:09:36.459 17:25:28 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:36.740 * Looking for test storage... 
00:09:36.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:36.740 17:25:28 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:36.740 17:25:28 version -- common/autotest_common.sh@1681 -- # lcov --version 00:09:36.740 17:25:28 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:36.740 17:25:28 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:36.740 17:25:28 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.740 17:25:28 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.740 17:25:28 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.740 17:25:28 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.740 17:25:28 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.740 17:25:28 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.740 17:25:28 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.740 17:25:28 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.740 17:25:28 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.740 17:25:28 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.740 17:25:28 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.740 17:25:28 version -- scripts/common.sh@344 -- # case "$op" in 00:09:36.740 17:25:28 version -- scripts/common.sh@345 -- # : 1 00:09:36.740 17:25:28 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.740 17:25:28 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:36.740 17:25:28 version -- scripts/common.sh@365 -- # decimal 1 00:09:36.740 17:25:28 version -- scripts/common.sh@353 -- # local d=1 00:09:36.740 17:25:28 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.740 17:25:28 version -- scripts/common.sh@355 -- # echo 1 00:09:36.740 17:25:28 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.740 17:25:28 version -- scripts/common.sh@366 -- # decimal 2 00:09:36.740 17:25:28 version -- scripts/common.sh@353 -- # local d=2 00:09:36.740 17:25:28 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.740 17:25:28 version -- scripts/common.sh@355 -- # echo 2 00:09:36.740 17:25:28 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.740 17:25:28 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.740 17:25:28 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.740 17:25:28 version -- scripts/common.sh@368 -- # return 0 00:09:36.740 17:25:28 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.740 17:25:28 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:36.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.740 --rc genhtml_branch_coverage=1 00:09:36.740 --rc genhtml_function_coverage=1 00:09:36.740 --rc genhtml_legend=1 00:09:36.740 --rc geninfo_all_blocks=1 00:09:36.740 --rc geninfo_unexecuted_blocks=1 00:09:36.740 00:09:36.740 ' 00:09:36.740 17:25:28 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:36.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.740 --rc genhtml_branch_coverage=1 00:09:36.740 --rc genhtml_function_coverage=1 00:09:36.740 --rc genhtml_legend=1 00:09:36.740 --rc geninfo_all_blocks=1 00:09:36.740 --rc geninfo_unexecuted_blocks=1 00:09:36.740 00:09:36.740 ' 00:09:36.740 17:25:28 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:36.740 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.740 --rc genhtml_branch_coverage=1 00:09:36.740 --rc genhtml_function_coverage=1 00:09:36.740 --rc genhtml_legend=1 00:09:36.740 --rc geninfo_all_blocks=1 00:09:36.740 --rc geninfo_unexecuted_blocks=1 00:09:36.740 00:09:36.740 ' 00:09:36.740 17:25:28 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:36.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.740 --rc genhtml_branch_coverage=1 00:09:36.740 --rc genhtml_function_coverage=1 00:09:36.740 --rc genhtml_legend=1 00:09:36.740 --rc geninfo_all_blocks=1 00:09:36.740 --rc geninfo_unexecuted_blocks=1 00:09:36.740 00:09:36.740 ' 00:09:36.740 17:25:28 version -- app/version.sh@17 -- # get_header_version major 00:09:36.740 17:25:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:36.740 17:25:28 version -- app/version.sh@14 -- # cut -f2 00:09:36.740 17:25:28 version -- app/version.sh@14 -- # tr -d '"' 00:09:36.740 17:25:28 version -- app/version.sh@17 -- # major=25 00:09:36.740 17:25:28 version -- app/version.sh@18 -- # get_header_version minor 00:09:36.741 17:25:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:36.741 17:25:28 version -- app/version.sh@14 -- # cut -f2 00:09:36.741 17:25:28 version -- app/version.sh@14 -- # tr -d '"' 00:09:36.741 17:25:28 version -- app/version.sh@18 -- # minor=1 00:09:36.741 17:25:28 version -- app/version.sh@19 -- # get_header_version patch 00:09:36.741 17:25:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:36.741 17:25:28 version -- app/version.sh@14 -- # cut -f2 00:09:36.741 17:25:28 version -- app/version.sh@14 -- # tr -d '"' 00:09:36.741 17:25:28 version -- app/version.sh@19 -- # patch=0 00:09:36.741 17:25:28 version -- app/version.sh@20 -- # get_header_version suffix 00:09:36.741 17:25:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:36.741 17:25:28 version -- app/version.sh@14 -- # cut -f2 00:09:36.741 17:25:28 version -- app/version.sh@14 -- # tr -d '"' 00:09:36.741 17:25:28 version -- app/version.sh@20 -- # suffix=-pre 00:09:36.741 17:25:28 version -- app/version.sh@22 -- # version=25.1 00:09:36.741 17:25:28 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:36.741 17:25:28 version -- app/version.sh@28 -- # version=25.1rc0 00:09:36.741 17:25:28 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:36.741 17:25:28 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:36.741 17:25:28 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:36.741 17:25:28 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:36.741 00:09:36.741 real 0m0.274s 00:09:36.741 user 0m0.169s 00:09:36.741 sys 0m0.150s 00:09:36.741 17:25:28 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.741 
17:25:28 version -- common/autotest_common.sh@10 -- # set +x 00:09:36.741 ************************************ 00:09:36.741 END TEST version 00:09:36.741 ************************************ 00:09:36.741 17:25:28 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:36.741 17:25:28 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:36.741 17:25:28 -- spdk/autotest.sh@194 -- # uname -s 00:09:36.741 17:25:28 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:36.741 17:25:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:36.741 17:25:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:36.741 17:25:28 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:36.741 17:25:28 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:09:36.741 17:25:28 -- spdk/autotest.sh@256 -- # timing_exit lib 00:09:36.741 17:25:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:36.741 17:25:28 -- common/autotest_common.sh@10 -- # set +x 00:09:37.002 17:25:28 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:09:37.002 17:25:28 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:09:37.002 17:25:28 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:09:37.002 17:25:28 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:09:37.002 17:25:28 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:09:37.002 17:25:28 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:09:37.002 17:25:28 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:37.002 17:25:28 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:37.002 17:25:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.002 17:25:28 -- common/autotest_common.sh@10 -- # set +x 00:09:37.002 ************************************ 00:09:37.003 START TEST nvmf_tcp 00:09:37.003 ************************************ 00:09:37.003 17:25:28 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:37.003 * Looking for test storage... 
00:09:37.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:37.003 17:25:28 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:37.003 17:25:28 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:09:37.003 17:25:28 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:37.003 17:25:28 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.003 17:25:28 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:37.003 17:25:28 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.003 17:25:28 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:37.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.003 --rc genhtml_branch_coverage=1 00:09:37.003 --rc genhtml_function_coverage=1 00:09:37.003 --rc genhtml_legend=1 00:09:37.003 --rc geninfo_all_blocks=1 00:09:37.003 --rc geninfo_unexecuted_blocks=1 00:09:37.003 00:09:37.003 ' 00:09:37.003 17:25:28 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:37.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.003 --rc genhtml_branch_coverage=1 00:09:37.003 --rc genhtml_function_coverage=1 00:09:37.003 --rc genhtml_legend=1 00:09:37.003 --rc geninfo_all_blocks=1 00:09:37.003 --rc geninfo_unexecuted_blocks=1 00:09:37.003 00:09:37.003 ' 00:09:37.003 17:25:28 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:09:37.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.003 --rc genhtml_branch_coverage=1 00:09:37.003 --rc genhtml_function_coverage=1 00:09:37.003 --rc genhtml_legend=1 00:09:37.003 --rc geninfo_all_blocks=1 00:09:37.003 --rc geninfo_unexecuted_blocks=1 00:09:37.003 00:09:37.003 ' 00:09:37.003 17:25:28 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:37.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.003 --rc genhtml_branch_coverage=1 00:09:37.003 --rc genhtml_function_coverage=1 00:09:37.003 --rc genhtml_legend=1 00:09:37.003 --rc geninfo_all_blocks=1 00:09:37.003 --rc geninfo_unexecuted_blocks=1 00:09:37.003 00:09:37.003 ' 00:09:37.003 17:25:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:37.003 17:25:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:37.003 17:25:28 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:37.003 17:25:28 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:37.003 17:25:28 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.264 17:25:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:37.264 ************************************ 00:09:37.264 START TEST nvmf_target_core 00:09:37.264 ************************************ 00:09:37.264 17:25:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:37.264 * Looking for test storage... 00:09:37.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:37.264 17:25:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:37.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.265 --rc genhtml_branch_coverage=1 00:09:37.265 --rc genhtml_function_coverage=1 00:09:37.265 --rc genhtml_legend=1 00:09:37.265 --rc geninfo_all_blocks=1 00:09:37.265 --rc geninfo_unexecuted_blocks=1 00:09:37.265 00:09:37.265 ' 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:37.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.265 --rc genhtml_branch_coverage=1 00:09:37.265 --rc genhtml_function_coverage=1 00:09:37.265 --rc genhtml_legend=1 00:09:37.265 --rc geninfo_all_blocks=1 00:09:37.265 --rc geninfo_unexecuted_blocks=1 00:09:37.265 00:09:37.265 ' 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:37.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.265 --rc genhtml_branch_coverage=1 00:09:37.265 --rc genhtml_function_coverage=1 00:09:37.265 --rc genhtml_legend=1 00:09:37.265 --rc geninfo_all_blocks=1 00:09:37.265 --rc geninfo_unexecuted_blocks=1 00:09:37.265 00:09:37.265 ' 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:37.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.265 --rc genhtml_branch_coverage=1 00:09:37.265 --rc genhtml_function_coverage=1 00:09:37.265 --rc genhtml_legend=1 00:09:37.265 --rc geninfo_all_blocks=1 00:09:37.265 --rc geninfo_unexecuted_blocks=1 00:09:37.265 00:09:37.265 ' 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.265 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:37.527 
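The "[: : integer expression expected" message above is a harmless script nit, not a test failure: build_nvmf_app_args expands a variable that is empty in this configuration into '[' '' -eq 1 ']', and [ cannot compare an empty string as an integer, so the test simply evaluates false and the run proceeds. A minimal reproduction, with a defensive form for comparison (the variable name is a stand-in; the log does not reveal which option is unset):

    # stand-in for whichever option common.sh line 33 tests; empty in this run
    opt=""
    [ "$opt" -eq 1 ] && echo enabled       # prints "[: : integer expression expected", then evaluates false
    [ "${opt:-0}" -eq 1 ] && echo enabled  # defaulting to 0 keeps the test well-formed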
************************************ 00:09:37.527 START TEST nvmf_abort 00:09:37.527 ************************************ 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:37.527 * Looking for test storage... 00:09:37.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:37.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.527 --rc genhtml_branch_coverage=1 00:09:37.527 --rc genhtml_function_coverage=1 00:09:37.527 --rc genhtml_legend=1 00:09:37.527 --rc geninfo_all_blocks=1 00:09:37.527 --rc geninfo_unexecuted_blocks=1 00:09:37.527 00:09:37.527 ' 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:37.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.527 --rc genhtml_branch_coverage=1 00:09:37.527 --rc genhtml_function_coverage=1 00:09:37.527 --rc genhtml_legend=1 00:09:37.527 --rc geninfo_all_blocks=1 00:09:37.527 --rc geninfo_unexecuted_blocks=1 00:09:37.527 00:09:37.527 ' 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:37.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.527 --rc genhtml_branch_coverage=1 00:09:37.527 --rc genhtml_function_coverage=1 00:09:37.527 --rc genhtml_legend=1 00:09:37.527 --rc geninfo_all_blocks=1 00:09:37.527 --rc geninfo_unexecuted_blocks=1 00:09:37.527 00:09:37.527 ' 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:37.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.527 --rc genhtml_branch_coverage=1 00:09:37.527 --rc genhtml_function_coverage=1 00:09:37.527 --rc genhtml_legend=1 00:09:37.527 --rc geninfo_all_blocks=1 00:09:37.527 --rc geninfo_unexecuted_blocks=1 00:09:37.527 00:09:37.527 ' 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:37.527 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.528 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.528 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.528 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.528 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.528 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.528 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.528 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.528 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.528 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:37.528 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:37.528 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.528 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.528 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.528 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.528 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:37.528 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
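With the preamble re-sourced, nvmftestinit (last line above) must locate physical NICs, since NET_TYPE=phy. The discovery that unfolds below boils down to matching PCI vendor:device pairs against known Intel and Mellanox IDs, then reading interface names out of sysfs. A simplified sketch of that walk, using only the E810 IDs (0x8086:0x1592/0x159b) this run matches:

    # Sketch of gather_supported_nvmf_pci_devs: classify NICs by PCI ID,
    # then list each matching device's netdevs from sysfs.
    for pci in /sys/bus/pci/devices/*; do
        id="$(cat "$pci/vendor"):$(cat "$pci/device")"
        case "$id" in
            0x8086:0x1592|0x8086:0x159b)          # Intel E810, bound to the ice driver
                for net in "$pci"/net/*; do
                    [ -e "$net" ] && echo "Found net devices under ${pci##*/}: ${net##*/}"
                done
                ;;
        esac
    done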
00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:09:37.790 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.955 17:25:36 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:45.955 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:45.955 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:45.955 17:25:36 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:45.955 Found net devices under 0000:31:00.0: cvl_0_0 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:45.955 Found net devices under 0000:31:00.1: cvl_0_1 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:45.955 17:25:36 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:45.955 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:45.955 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:45.955 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:45.955 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:45.955 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:45.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:09:45.955 00:09:45.955 --- 10.0.0.2 ping statistics --- 00:09:45.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.955 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:09:45.955 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:45.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:45.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:09:45.955 00:09:45.955 --- 10.0.0.1 ping statistics --- 00:09:45.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.955 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:09:45.956 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.956 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:09:45.956 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:45.956 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.956 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:45.956 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:45.956 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.956 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:45.956 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:45.956 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:45.956 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:45.956 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:45.956 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:45.956 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=150020 00:09:45.956 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 150020 00:09:45.956 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:45.956 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 150020 ']' 00:09:45.956 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.956 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:45.956 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.956 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:45.956 17:25:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:45.956 [2024-10-08 17:25:37.239853] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
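Both pings succeeding confirms the loopback topology nvmf_tcp_init assembled above: the first E810 port (cvl_0_0, 10.0.0.2, target side) lives in its own network namespace, the second (cvl_0_1, 10.0.0.1, initiator side) stays in the root namespace, and an iptables rule admits NVMe/TCP traffic on port 4420. Condensed from the commands in this log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into its own netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                    # root netns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target netns -> initiator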
00:09:45.956 [2024-10-08 17:25:37.239914] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.956 [2024-10-08 17:25:37.327286] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:45.956 [2024-10-08 17:25:37.422298] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.956 [2024-10-08 17:25:37.422360] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.956 [2024-10-08 17:25:37.422368] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.956 [2024-10-08 17:25:37.422375] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.956 [2024-10-08 17:25:37.422381] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.956 [2024-10-08 17:25:37.423825] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.956 [2024-10-08 17:25:37.424094] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.956 [2024-10-08 17:25:37.424234] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:46.218 [2024-10-08 17:25:38.133873] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:46.218 Malloc0 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:46.218 Delay0 
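DPDK is up and the three reactors (core mask 0xE) are running, so abort.sh starts issuing rpc_cmd calls; bdev_delay_create above has just printed its bdev name, Delay0. Those calls map one-to-one onto plain scripts/rpc.py invocations against the default /var/tmp/spdk.sock that waitforlisten polled; a hand-runnable sketch of the whole setup ($SPDK stands in for the checkout path, and the last three calls appear in the log just below):

    rpc="$SPDK/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256   # transport options exactly as abort.sh passes them
    $rpc bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB ramdisk, 4096-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000      # ~1 s latencies so I/O stays queued long enough to abort
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420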
00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:46.218 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.481 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:46.481 [2024-10-08 17:25:38.217681] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.481 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.481 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:46.481 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.481 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:46.481 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.481 17:25:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:46.481 [2024-10-08 17:25:38.360314] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:49.032 Initializing NVMe Controllers 00:09:49.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:49.032 controller IO queue size 128 less than required 00:09:49.032 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:49.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:49.032 Initialization complete. Launching workers. 
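The banner above is printed by the abort example itself. Its invocation (verbatim from this log, annotated) pins a single initiator core against the slow Delay0 namespace and aborts whatever is still queued; in the counters below, the ~30k "failed" I/O are broadly the commands the ~30k successful aborts cancelled, which is the intended shape for this test:

    "$SPDK/build/examples/abort" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 \      # one initiator core
        -t 1 \        # run for one second
        -l warning \  # log level
        -q 128        # queue depth 128, hence the "queue size 128 less than required" notice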
00:09:49.032 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30127 00:09:49.032 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30188, failed to submit 62 00:09:49.032 success 30131, unsuccessful 57, failed 0 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:49.032 rmmod nvme_tcp 00:09:49.032 rmmod nvme_fabrics 00:09:49.032 rmmod nvme_keyring 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 150020 ']' 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 150020 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 150020 ']' 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 150020 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 150020 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 150020' 00:09:49.032 killing process with pid 150020 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 150020 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 150020 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.032 17:25:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.949 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:50.949 00:09:50.949 real 0m13.522s 00:09:50.949 user 0m14.078s 00:09:50.949 sys 0m6.513s 00:09:50.949 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:50.949 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:50.949 ************************************ 00:09:50.949 END TEST nvmf_abort 00:09:50.949 ************************************ 00:09:50.949 17:25:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:50.949 17:25:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:50.949 17:25:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:50.949 17:25:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:50.949 ************************************ 00:09:50.949 START TEST nvmf_ns_hotplug_stress 00:09:50.949 ************************************ 00:09:50.949 17:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:51.212 * Looking for test storage... 
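nvmf_ns_hotplug_stress now opens with the same preamble, so the teardown that just closed nvmf_abort (13.5 s wall clock per the timing summary) is worth spelling out once; condensed from this log, with one caveat: _remove_spdk_ns runs with xtrace suppressed, so the namespace deletion shown here is inferred rather than logged:

    "$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    kill 150020                                          # killprocess: stop the nvmf_tgt for this test
    modprobe -v -r nvme-tcp                              # rmmod output above shows nvme_fabrics and nvme_keyring going too
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore # iptr: drop only the SPDK-tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk                      # inferred: _remove_spdk_ns returns cvl_0_0 to the root netns
    ip -4 addr flush cvl_0_1                             # clear the initiator address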
00:09:51.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:51.212 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:51.212 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:09:51.212 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:51.212 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:51.212 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.212 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.212 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.212 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:51.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.213 --rc genhtml_branch_coverage=1 00:09:51.213 --rc genhtml_function_coverage=1 00:09:51.213 --rc genhtml_legend=1 00:09:51.213 --rc geninfo_all_blocks=1 00:09:51.213 --rc geninfo_unexecuted_blocks=1 00:09:51.213 00:09:51.213 ' 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:51.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.213 --rc genhtml_branch_coverage=1 00:09:51.213 --rc genhtml_function_coverage=1 00:09:51.213 --rc genhtml_legend=1 00:09:51.213 --rc geninfo_all_blocks=1 00:09:51.213 --rc geninfo_unexecuted_blocks=1 00:09:51.213 00:09:51.213 ' 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:51.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.213 --rc genhtml_branch_coverage=1 00:09:51.213 --rc genhtml_function_coverage=1 00:09:51.213 --rc genhtml_legend=1 00:09:51.213 --rc geninfo_all_blocks=1 00:09:51.213 --rc geninfo_unexecuted_blocks=1 00:09:51.213 00:09:51.213 ' 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:51.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.213 --rc genhtml_branch_coverage=1 00:09:51.213 --rc genhtml_function_coverage=1 00:09:51.213 --rc genhtml_legend=1 00:09:51.213 --rc geninfo_all_blocks=1 00:09:51.213 --rc geninfo_unexecuted_blocks=1 00:09:51.213 00:09:51.213 ' 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.213 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
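The PATH dumps above keep growing because paths/export.sh prepends its tool directories to the existing $PATH each time it is sourced, so the /opt/go, /opt/protoc and /opt/golangci entries pile up once per source. That is harmless for command lookup (the first hit wins), just noisy in the trace. If one wanted to collapse such a PATH, a sketch that is not part of the SPDK scripts:

# Drop repeated $PATH entries, keeping the first occurrence of each.
PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
PATH=${PATH%:}   # trim the trailing colon that awk's ORS leaves behind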
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:51.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:09:51.214 17:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:59.367 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.367 
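The "[: : integer expression expected" complaint a little further up comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': test's -eq needs two integers, and the left-hand variable expanded to the empty string. The usual guard is to default the expansion; a sketch with a hypothetical flag name, not the actual common.sh variable:

# SOME_TEST_FLAG is a stand-in name; the ':-0' default keeps -eq happy
# even when the flag is unset or empty.
if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
    echo 'flag set'
fi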
17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:59.367 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:59.367 Found net devices under 0000:31:00.0: cvl_0_0 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:59.367 Found net devices under 0000:31:00.1: cvl_0_1 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.367 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:59.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:09:59.368 00:09:59.368 --- 10.0.0.2 ping statistics --- 00:09:59.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.368 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:59.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:59.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:09:59.368 00:09:59.368 --- 10.0.0.1 ping statistics --- 00:09:59.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.368 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=155114 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 155114 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 
155114 ']' 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:59.368 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:59.368 [2024-10-08 17:25:50.881245] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:09:59.368 [2024-10-08 17:25:50.881315] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.368 [2024-10-08 17:25:50.969774] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:59.368 [2024-10-08 17:25:51.062140] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.368 [2024-10-08 17:25:51.062200] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.368 [2024-10-08 17:25:51.062208] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.368 [2024-10-08 17:25:51.062215] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.368 [2024-10-08 17:25:51.062221] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
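Two things worth unpacking at this point in the trace. First, the interface plumbing done above: the two E810 ports show up as cvl_0_0 and cvl_0_1, and the target port is moved into its own network namespace so that target and initiator traffic leaves on one physical port and returns on the other instead of short-circuiting through the local stack. Condensed from the commands in the log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port

Second, the -m 0xE passed to nvmf_tgt is a CPU core mask: 0xE is binary 1110, i.e. cores 1-3 with core 0 left out, which matches the three reactors reported just below. A quick way to decode such a mask:

mask=0xE
printf 'mask %s -> cores:' "$mask"
for core in 0 1 2 3; do
  (( (mask >> core) & 1 )) && printf ' %d' "$core"
done
printf '\n'    # prints: mask 0xE -> cores: 1 2 3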
00:09:59.368 [2024-10-08 17:25:51.063531] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.368 [2024-10-08 17:25:51.063688] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.368 [2024-10-08 17:25:51.063689] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.942 17:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:59.942 17:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:09:59.942 17:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:59.942 17:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:59.942 17:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:59.942 17:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.942 17:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:59.942 17:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:59.942 [2024-10-08 17:25:51.906265] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.203 17:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:00.203 17:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:00.465 [2024-10-08 17:25:52.331156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.465 17:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:00.726 17:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:00.988 Malloc0 00:10:00.988 17:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:00.988 Delay0 00:10:00.988 17:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.249 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:01.510 NULL1 00:10:01.510 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:01.772 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=155510 00:10:01.772 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:01.772 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:01.772 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.772 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.033 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:02.033 17:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:02.295 true 00:10:02.295 17:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:02.295 17:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.556 17:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.556 17:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:02.556 17:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:02.817 true 00:10:02.817 17:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:02.817 17:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.079 17:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.079 17:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:03.079 17:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:03.340 true 00:10:03.341 17:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:03.341 17:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.603 17:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.603 17:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:03.603 17:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:03.865 true 00:10:03.865 17:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:03.865 17:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.127 17:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.127 17:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:04.127 17:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:04.388 true 00:10:04.388 17:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:04.388 17:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.650 17:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.910 17:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:04.910 17:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:04.910 true 00:10:04.910 17:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:04.910 17:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.172 17:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.433 17:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:05.433 17:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:05.433 true 00:10:05.433 17:25:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:05.433 17:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.694 17:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.955 17:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:05.955 17:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:05.955 true 00:10:05.955 17:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:05.955 17:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.216 17:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.477 17:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:06.477 17:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:06.477 true 00:10:06.477 17:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:06.477 17:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.738 17:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.999 17:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:06.999 17:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:06.999 true 00:10:07.260 17:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:07.260 17:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.260 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.520 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:07.520 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:07.781 true 00:10:07.781 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:07.781 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.781 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.043 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:08.043 17:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:08.304 true 00:10:08.304 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:08.304 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.304 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.565 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:08.565 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:08.858 true 00:10:08.858 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:08.858 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.858 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.119 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:09.119 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:09.380 true 00:10:09.380 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:09.380 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.641 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.641 17:26:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:09.641 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:09.902 true 00:10:09.902 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:09.903 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.165 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.165 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:10.165 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:10.426 true 00:10:10.426 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:10.426 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.687 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.948 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:10.948 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:10.948 true 00:10:10.948 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:10.948 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.208 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.469 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:11.469 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:11.469 true 00:10:11.469 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:11.469 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.730 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.992 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:11.992 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:11.992 true 00:10:11.992 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:11.992 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.253 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.514 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:12.514 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:12.514 true 00:10:12.776 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:12.776 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.776 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.037 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:13.037 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:13.298 true 00:10:13.298 17:26:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:13.298 17:26:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.298 17:26:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.558 17:26:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:13.558 17:26:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:13.819 true 00:10:13.819 17:26:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:13.819 17:26:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.819 17:26:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.080 17:26:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:14.080 17:26:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:14.341 true 00:10:14.341 17:26:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:14.341 17:26:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.602 17:26:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.602 17:26:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:14.602 17:26:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:14.863 true 00:10:14.863 17:26:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:14.863 17:26:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.124 17:26:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.124 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:15.124 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:15.385 true 00:10:15.385 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:15.385 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.646 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.907 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:15.907 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:15.907 true 00:10:15.907 17:26:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:15.907 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.167 17:26:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.428 17:26:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:16.428 17:26:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:16.428 true 00:10:16.428 17:26:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:16.428 17:26:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.689 17:26:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.949 17:26:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:16.949 17:26:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:16.949 true 00:10:16.950 17:26:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:16.950 17:26:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.211 17:26:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.472 17:26:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:17.472 17:26:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:17.472 true 00:10:17.473 17:26:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:17.473 17:26:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.733 17:26:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.994 17:26:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:17.994 17:26:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:17.994 true 00:10:18.255 17:26:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:18.255 17:26:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.255 17:26:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.516 17:26:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:18.516 17:26:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:18.778 true 00:10:18.778 17:26:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:18.778 17:26:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.778 17:26:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.040 17:26:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:19.040 17:26:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:19.301 true 00:10:19.301 17:26:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:19.301 17:26:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.563 17:26:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.563 17:26:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:19.563 17:26:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:19.832 true 00:10:19.832 17:26:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:19.832 17:26:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.093 17:26:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.093 17:26:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:20.093 17:26:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:20.354 true 00:10:20.354 17:26:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:20.354 17:26:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.616 17:26:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.616 17:26:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:20.616 17:26:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:20.877 true 00:10:20.877 17:26:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:20.877 17:26:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.139 17:26:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.400 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:21.400 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:21.400 true 00:10:21.400 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:21.400 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.661 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.922 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:21.922 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:21.922 true 00:10:21.922 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:21.922 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.184 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.447 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:22.447 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:22.447 true 00:10:22.707 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:22.707 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.707 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.968 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:10:22.968 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:23.230 true 00:10:23.230 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:23.230 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.230 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.491 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:10:23.491 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:23.752 true 00:10:23.752 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:23.752 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.752 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.016 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:24.016 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:24.277 true 00:10:24.277 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:24.277 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.539 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.539 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:24.539 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:24.800 true 00:10:24.800 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:24.800 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.061 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.061 17:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:25.061 17:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:25.322 true 00:10:25.322 17:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:25.322 17:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.584 17:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.845 17:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:25.845 17:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:25.845 true 00:10:25.845 17:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:25.846 17:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.107 17:26:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.368 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:26.368 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:26.368 true 00:10:26.368 17:26:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:26.368 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.629 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.890 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:26.890 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:26.890 true 00:10:27.152 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:27.152 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.152 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.413 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:27.413 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:27.674 true 00:10:27.674 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:27.674 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.674 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.935 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:27.935 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:28.197 true 00:10:28.197 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:28.197 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.197 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.457 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:10:28.458 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:28.719 true 00:10:28.719 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:28.719 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.980 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.980 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:10:28.980 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:29.242 true 00:10:29.242 17:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:29.242 17:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.503 17:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.764 17:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:10:29.764 17:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:29.764 true 00:10:29.764 17:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:29.764 17:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.025 17:26:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.287 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:10:30.287 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:10:30.287 true 00:10:30.287 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:30.287 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.549 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.810 17:26:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:10:30.810 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:10:30.811 true 00:10:31.071 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:31.071 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.071 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.332 17:26:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:10:31.332 17:26:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:10:31.593 true 00:10:31.593 17:26:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:31.593 17:26:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.594 17:26:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.854 17:26:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:10:31.854 17:26:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:10:31.854 Initializing NVMe Controllers 00:10:31.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:31.855 Controller IO queue size 128, less than required. 00:10:31.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:31.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:31.855 Initialization complete. Launching workers. 
00:10:31.855 ======================================================== 00:10:31.855 Latency(us) 00:10:31.855 Device Information : IOPS MiB/s Average min max 00:10:31.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 31249.93 15.26 4095.86 1188.26 7825.77 00:10:31.855 ======================================================== 00:10:31.855 Total : 31249.93 15.26 4095.86 1188.26 7825.77 00:10:31.855 00:10:32.115 true 00:10:32.115 17:26:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 155510 00:10:32.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (155510) - No such process 00:10:32.115 17:26:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 155510 00:10:32.115 17:26:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.376 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:32.376 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:32.376 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:32.376 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:32.376 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:32.376 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:32.637 null0 00:10:32.637 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:32.637 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:32.637 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:32.637 null1 00:10:32.637 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:32.637 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:32.637 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:32.898 null2 00:10:32.898 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:32.898 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:32.898 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:33.159 null3 00:10:33.159 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
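The run above is the single-namespace phase of ns_hotplug_stress.sh: while the background I/O generator (PID 155510) stays alive, the script hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1, re-attaches the Delay0 bdev, and grows the NULL1 null bdev by one block per pass (null_size 1027, 1028, ... 1055 in this run), so the initiator keeps seeing detach, attach, and resize events under load. The perf summary is self-consistent: with a queue depth of 128 and a mean latency of 4095.86 us, Little's law gives 128 / 4095.86e-6 s ≈ 31251 IOPS, matching the reported 31249.93. A minimal sketch of the loop, reconstructed from the @44-@50 trace markers (rpc.py stands in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py invocation, and PERF_PID is an assumed variable name, not necessarily the script's own):

    # Flap namespace 1 and resize NULL1 while the I/O generator lives (@44-@50).
    null_size=1024
    while kill -0 "$PERF_PID" 2>/dev/null; do                          # @44: generator still up?
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @45: hot-remove ns 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46: re-attach Delay0
        null_size=$((null_size + 1))                                   # @49: next target size
        rpc.py bdev_null_resize NULL1 "$null_size"                     # @50: resize under I/O
    done
    wait "$PERF_PID"   # @53: reap the generator; here it had already exited ("No such process")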
00:10:33.159 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:33.159 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:33.159 null4 00:10:33.419 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:33.419 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:33.419 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:33.419 null5 00:10:33.419 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:33.419 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:33.419 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:33.681 null6 00:10:33.681 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:33.681 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:33.681 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:33.943 null7 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:33.943 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 162208 162210 162213 162216 162219 162222 162226 162229 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:33.944 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:34.206 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:34.206 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:34.206 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:34.206 17:26:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
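Once the generator is gone, the script drains the remaining namespaces and switches to the concurrent phase traced above: eight 100 MiB null bdevs with 4096-byte blocks (null0-null7) are created, and eight add_remove workers are forked, each pairing namespace ID i+1 with null$i; the `wait 162208 162210 162213 162216 162219 162222 162226 162229` at @66 joins those eight worker PIDs, whose xtrace output interleaves through the remainder of the log. A sketch of this phase, reconstructed from the @14-@18 and @58-@66 markers (the helper body is inferred from the per-worker trace, so treat the exact shape as an approximation):

    # One worker: add namespace $nsid backed by $bdev, then remove it, ten times (@14-@18).
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }

    nthreads=8                                        # @58
    pids=()
    for ((i = 0; i < nthreads; i++)); do              # @59-@60: create the backing bdevs
        rpc.py bdev_null_create "null$i" 100 4096     # 100 MiB, 4 KiB block size
    done
    for ((i = 0; i < nthreads; i++)); do              # @62-@64: fork one worker per bdev
        add_remove "$((i + 1))" "null$i" &            # nsid i+1 paired with null$i
        pids+=($!)
    done
    wait "${pids[@]}"                                 # @66: join all eight workers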
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.206 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:34.468 17:26:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:34.468 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:34.468 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:34.468 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:34.468 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:34.468 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:34.468 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:34.468 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.468 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.468 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.468 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.731 17:26:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.731 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:34.731 17:26:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:35.004 17:26:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.004 17:26:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:35.265 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:35.265 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:35.265 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:35.265 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.265 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.265 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.265 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:35.265 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:35.265 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:35.265 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.265 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.265 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:35.265 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:35.265 17:26:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.265 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.265 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:35.265 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.265 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.265 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:35.528 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.528 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.528 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:35.528 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.528 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.528 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:35.528 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:35.528 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.528 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.528 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:35.528 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.528 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.528 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:35.528 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:35.528 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:35.528 17:26:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:35.528 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:35.528 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.528 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.528 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.528 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:35.791 17:26:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:35.791 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:36.053 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.053 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:36.053 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.053 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.053 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:36.053 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.053 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.053 17:26:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:36.053 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:36.053 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.053 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.053 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:36.053 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:36.053 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.053 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.053 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:36.053 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.053 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.053 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:36.315 17:26:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.315 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.578 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:36.578 17:26:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:36.578 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.578 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.578 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:36.578 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:36.578 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.578 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.578 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:36.578 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:36.578 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:36.578 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.578 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.578 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:36.578 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.578 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.578 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.578 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:36.578 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:36.840 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.840 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.840 17:26:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:36.840 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:36.840 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.840 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.840 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:36.840 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.840 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.840 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:36.840 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.840 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.840 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:36.840 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:36.840 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:36.840 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.840 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.840 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:36.840 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.840 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.840 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:36.840 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:36.840 17:26:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:37.102 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.102 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:37.102 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.102 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.102 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:37.102 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:37.102 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.102 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.102 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:37.102 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:37.102 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.102 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.102 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:37.102 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.102 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.102 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:37.102 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.102 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.102 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:37.102 17:26:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:37.102 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.102 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.102 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:37.102 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.102 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.102 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:37.102 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:37.364 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.364 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.364 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:37.364 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:37.364 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:37.364 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.364 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.364 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.364 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:37.364 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:37.364 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:37.364 17:26:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.364 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.364 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:37.364 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:37.364 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.364 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.364 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.364 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.364 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.364 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.625 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:37.626 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:37.626 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.626 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.626 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.626 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.626 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.626 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.626 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.626 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.626 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.626 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.626 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:37.626 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:37.626 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:37.626 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # 
sync 00:10:37.887 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:37.887 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:10:37.887 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:37.887 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:37.887 rmmod nvme_tcp 00:10:37.887 rmmod nvme_fabrics 00:10:37.887 rmmod nvme_keyring 00:10:37.887 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:37.887 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:10:37.887 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:10:37.887 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 155114 ']' 00:10:37.887 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 155114 00:10:37.887 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 155114 ']' 00:10:37.887 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 155114 00:10:37.887 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:10:37.887 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:37.887 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 155114 00:10:37.887 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:37.887 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:37.887 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 155114' 00:10:37.887 killing process with pid 155114 00:10:37.887 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 155114 00:10:37.887 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 155114 00:10:38.149 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:38.149 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:38.149 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:38.149 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:10:38.149 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:10:38.149 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:38.149 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:10:38.149 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:38.149 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:38.149 17:26:29 
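The add/remove churn traced above is the stressor itself: ns_hotplug_stress.sh@16 drives a bounded counter, @17 re-attaches one of the null bdevs to cnode1 as a namespace, and @18 tears namespaces back down, all through rpc.py. A minimal sketch of that shape, assuming the remover runs as a separate worker and the nsid order is shuffled (both inferred from the interleaving in the trace, not read from the script):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1

add_ns_loop() {                        # @16/@17: bounded re-add loop
    local i=0 n
    while (( i < 10 )); do
        n=$(( RANDOM % 8 + 1 ))        # shuffled nsid order: an assumption
        # bdevs null0..null7 were created earlier; nsid n maps to null(n-1)
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$subsys" "null$((n - 1))" || true
        (( ++i ))
    done
}

remove_ns_loop() {                     # @18 shows no counter of its own, so the
    local n                            # remover is sketched as a parallel worker
    for n in 1 2 3 4 5 6 7 8; do
        "$rpc" nvmf_subsystem_remove_ns "$subsys" "$n" || true   # races are expected
    done
}

add_ns_loop & remove_ns_loop &
wait
trap - SIGINT SIGTERM EXIT             # @68: stress done, drop the error trap
nvmftestfini                           # @70: the rmmod/killprocess tail above

Once the counters run out, nvmftestfini produces exactly the tail visible above: rmmod nvme_tcp/nvme_fabrics/nvme_keyring, killprocess on the nvmf_tgt pid (155114 in this run), and the iptables/netns cleanup.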
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.149 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.149 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.067 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:40.067 00:10:40.067 real 0m49.069s 00:10:40.067 user 3m18.684s 00:10:40.067 sys 0m17.384s 00:10:40.067 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:40.067 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:40.067 ************************************ 00:10:40.067 END TEST nvmf_ns_hotplug_stress 00:10:40.067 ************************************ 00:10:40.067 17:26:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:40.067 17:26:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:40.067 17:26:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:40.067 17:26:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:40.067 ************************************ 00:10:40.067 START TEST nvmf_delete_subsystem 00:10:40.067 ************************************ 00:10:40.067 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:40.330 * Looking for test storage... 
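Before delete_subsystem.sh does any nvmf work, autotest_common.sh (traced just below) decides which lcov flags this box supports by comparing the installed lcov version against 2. The helpers are a generic field-by-field comparator; what follows is a readable reconstruction consistent with the scripts/common.sh trace, assuming non-numeric fields compare as 0, not a verbatim copy:

decimal() {
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] || d=0        # non-numeric fields compare as 0 (assumption)
    echo "$d"
}

cmp_versions() {
    local IFS=.-:                      # split versions on '.', '-' or ':'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local v ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        local a=$(decimal "${ver1[v]:-0}") b=$(decimal "${ver2[v]:-0}")
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' || $op == '>=' || $op == '<=' ]]   # all fields equal
}

lt() { cmp_versions "$1" '<' "$2"; }

# The gate itself: lcov older than 2.x gets the branch/function coverage flags
lt "$(lcov --version | awk '{print $NF}')" 2 \
    && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

For the lcov 1.15 on this rig, the first differing field (1 vs 2) satisfies '<', so cmp_versions returns 0 and the legacy --rc options get exported, matching the LCOV_OPTS/LCOV values in the trace.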
00:10:40.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:40.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.330 --rc genhtml_branch_coverage=1 00:10:40.330 --rc genhtml_function_coverage=1 00:10:40.330 --rc genhtml_legend=1 00:10:40.330 --rc geninfo_all_blocks=1 00:10:40.330 --rc geninfo_unexecuted_blocks=1 00:10:40.330 00:10:40.330 ' 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:40.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.330 --rc genhtml_branch_coverage=1 00:10:40.330 --rc genhtml_function_coverage=1 00:10:40.330 --rc genhtml_legend=1 00:10:40.330 --rc geninfo_all_blocks=1 00:10:40.330 --rc geninfo_unexecuted_blocks=1 00:10:40.330 00:10:40.330 ' 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:40.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.330 --rc genhtml_branch_coverage=1 00:10:40.330 --rc genhtml_function_coverage=1 00:10:40.330 --rc genhtml_legend=1 00:10:40.330 --rc geninfo_all_blocks=1 00:10:40.330 --rc geninfo_unexecuted_blocks=1 00:10:40.330 00:10:40.330 ' 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:40.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.330 --rc genhtml_branch_coverage=1 00:10:40.330 --rc genhtml_function_coverage=1 00:10:40.330 --rc genhtml_legend=1 00:10:40.330 --rc geninfo_all_blocks=1 00:10:40.330 --rc geninfo_unexecuted_blocks=1 00:10:40.330 00:10:40.330 ' 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.330 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:40.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:40.331 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:48.481 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:48.481 
17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:48.481 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:48.481 Found net devices under 0000:31:00.0: cvl_0_0 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.481 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:48.482 Found net devices under 0000:31:00.1: cvl_0_1 
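The device hunt above is gather_supported_nvmf_pci_devs: nvmf/common.sh whitelists NIC device IDs per family (e810, x722, mlx), keeps the family requested by SPDK_TEST_NVMF_NICS=e810, and maps each surviving PCI function to its kernel interface names through sysfs. The real code builds a pci_bus_cache map up front and also handles the x722/mellanox IDs and RDMA paths; the direct sysfs walk below is a simplification that yields the same two cvl_0_* pairs on this rig:

net_devs=()
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    [[ $vendor == 0x8086 ]] || continue
    [[ $device == 0x1592 || $device == 0x159b ]] || continue   # E810 IDs per the trace
    echo "Found ${pci##*/} ($vendor - $device)"
    pci_net_devs=("$pci/net/"*)                  # interfaces bound to this function
    [[ -e ${pci_net_devs[0]} ]] || continue      # driver unbound: nothing usable
    pci_net_devs=("${pci_net_devs[@]##*/}")      # basename only, e.g. cvl_0_0
    echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done

On this machine both 0000:31:00.0 and 0000:31:00.1 report 0x8086/0x159b under the ice driver, so net_devs ends up as (cvl_0_0 cvl_0_1), the pair the TCP setup below splits between target and initiator.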
00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:48.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:48.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms
00:10:48.482
00:10:48.482 --- 10.0.0.2 ping statistics ---
00:10:48.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:48.482 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms
00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:48.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:48.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms
00:10:48.482
00:10:48.482 --- 10.0.0.1 ping statistics ---
00:10:48.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:48.482 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms
00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0
00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=167628
00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 167628
00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 167628 ']'
00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:48.482 17:26:39
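[Editor's note] nvmf_tcp_init, traced above, turns the two ports of one physical NIC into a back-to-back initiator/target pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and an iptables rule opens TCP/4420 before both directions are ping-verified. The same setup as a plain script, commands lifted from the trace (interface names are specific to this runner; requires root):

    TARGET_NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    # Tag the rule so cleanup can strip it later via iptables-save | grep -v SPDK_NVMF
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1     # target -> initiator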
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:48.482 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.482 [2024-10-08 17:26:40.007652] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:10:48.482 [2024-10-08 17:26:40.007716] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.482 [2024-10-08 17:26:40.099559] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:48.482 [2024-10-08 17:26:40.194670] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.482 [2024-10-08 17:26:40.194735] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.482 [2024-10-08 17:26:40.194744] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.482 [2024-10-08 17:26:40.194751] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.482 [2024-10-08 17:26:40.194758] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:48.482 [2024-10-08 17:26:40.195855] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.482 [2024-10-08 17:26:40.195853] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:49.056 [2024-10-08 17:26:40.894361] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:49.056 17:26:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:49.056 [2024-10-08 17:26:40.918667] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:49.056 NULL1 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:49.056 Delay0 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=167743 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:49.056 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:49.056 [2024-10-08 17:26:41.035892] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
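[Editor's note] The target for the delete-subsystem case is assembled over JSON-RPC (rpc_cmd is the harness wrapper around scripts/rpc.py, talking to /var/tmp/spdk.sock): transport, subsystem, listener, then a null bdev wrapped in a delay bdev so that I/O is still outstanding when the subsystem is torn down two seconds into the perf run. A rough standalone equivalent of the traced sequence, with every argument copied from the log (the -r/-t/-w/-n delay values are average/tail read and write latencies in microseconds, i.e. a full second per I/O):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MiB bdev, 512-byte blocks
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # Load generator from the trace: 5 s of 512-byte 70/30 randrw at queue depth 128
    ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4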
00:10:50.975 17:26:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:50.975 17:26:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:50.975 17:26:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[repeated 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' markers elided; they run continuously between the distinct errors kept below]
00:10:51.237 [2024-10-08 17:26:43.131958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54afd0 is same with the state(6) to be set
00:10:51.237 [2024-10-08 17:26:43.137486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa15c00cfe0 is same with the state(6) to be set
00:10:52.181 [2024-10-08 17:26:44.095318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54c6b0 is same with the state(6) to be set
00:10:52.181 [2024-10-08 17:26:44.135155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54b1b0 is same with the state(6) to be set
00:10:52.181 [2024-10-08 17:26:44.135480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54b6c0 is same with the state(6) to be set
00:10:52.181 [2024-10-08 17:26:44.137728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa15c00d310 is same with the state(6) to be set
00:10:52.181 [2024-10-08 17:26:44.139848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa15c000c00 is same with the state(6) to be set
00:10:52.181 Initializing NVMe Controllers
00:10:52.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:52.181 Controller IO queue size 128, less than required.
00:10:52.181 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:52.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:52.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:52.181 Initialization complete. Launching workers.
00:10:52.181 ========================================================
00:10:52.181 Latency(us)
00:10:52.181 Device Information : IOPS MiB/s Average min max
00:10:52.181 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 155.32 0.08 929974.02 366.75 1006693.86
00:10:52.181 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.34 0.07 946106.14 337.56 1011915.14
00:10:52.181 ========================================================
00:10:52.181 Total : 305.67 0.15 937908.71 337.56 1011915.14
00:10:52.181
00:10:52.181 [2024-10-08 17:26:44.140334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x54c6b0 (9): Bad file descriptor
00:10:52.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:10:52.181 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:52.181 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:10:52.181 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 167743
00:10:52.181 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:10:52.753 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:10:52.753 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 167743
00:10:52.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (167743) - No such process
00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 167743
00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 167743
00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:52.754 17:26:44
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 167743 00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:52.754 [2024-10-08 17:26:44.673044] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=168554 00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 168554 00:10:52.754 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:53.016 [2024-10-08 17:26:44.758730] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
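[Editor's note] The completion-error flood condensed above is the point of the test: nvmf_delete_subsystem is issued while perf still has queue depth outstanding, the target tears down the queue pairs, and every in-flight command completes with generic status sct=0, sc=8, consistent with NVMe's "command aborted due to SQ deletion" status (0x08). The script then only needs to confirm that perf exits on its own, which is the @34-@38 poll loop traced above; a minimal re-creation of that loop, with the bound and interval from the trace ($perf_pid as in the log):

    delay=0
    while kill -0 "$perf_pid"; do       # probe the process; no signal is actually sent
        # give perf ~15 s (30 iterations of 0.5 s) to notice the dead connection
        (( delay++ > 30 )) && exit 1
        sleep 0.5
    done
    # falling out of the loop means kill -0 failed: the perf process is gone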
[six iterations of the @60/@57/@58 poll loop (kill -0 168554 / sleep 0.5) elided; Jenkins timestamps 00:10:53.278 through 00:10:55.824, wall clock 17:26:45-17:26:47]
00:10:56.086 Initializing NVMe Controllers
00:10:56.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:56.086 Controller IO queue size 128, less than required.
00:10:56.086 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:56.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:56.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:56.086 Initialization complete. Launching workers.
00:10:56.086 ========================================================
00:10:56.086 Latency(us)
00:10:56.086 Device Information : IOPS MiB/s Average min max
00:10:56.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002313.62 1000257.73 1042521.88
00:10:56.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002964.44 1000277.07 1008148.28
00:10:56.086 ========================================================
00:10:56.086 Total : 256.00 0.12 1002639.03 1000257.73 1042521.88
00:10:56.086
00:10:56.352 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:56.352 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 168554
00:10:56.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (168554) - No such process
00:10:56.352 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 168554
00:10:56.352 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:10:56.352 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:10:56.352 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup
00:10:56.352 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:10:56.352 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:56.352 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:10:56.352 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:56.352 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:56.352 rmmod nvme_tcp
00:10:56.352 rmmod nvme_fabrics
00:10:56.352 rmmod nvme_keyring
00:10:56.352 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:56.352 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:10:56.352 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:10:56.352 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 167628 ']'
00:10:56.352 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 167628
00:10:56.352 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 167628 ']'
00:10:56.352 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 167628
00:10:56.352 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:10:56.352 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:56.352 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 167628
00:10:56.614 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:56.614 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:56.614 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 167628'
00:10:56.614 killing process with pid 167628
00:10:56.614 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 167628
00:10:56.614 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 167628
00:10:56.614 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:10:56.614 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:10:56.614 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:10:56.614 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:10:56.614 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save
00:10:56.614 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:10:56.614 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore
00:10:56.614 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:56.614 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:56.614 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:56.614 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:56.614 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:59.164
00:10:59.164 real 0m18.503s
00:10:59.164 user 0m30.785s
00:10:59.164 sys 0m6.852s
00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:10:59.164 ************************************
00:10:59.164 END TEST nvmf_delete_subsystem
00:10:59.164 ************************************
00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:59.164 ************************************
00:10:59.164 START TEST nvmf_host_management
00:10:59.164 ************************************
00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:10:59.164 * Looking for test storage...
00:10:59.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:59.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.164 --rc genhtml_branch_coverage=1 00:10:59.164 --rc genhtml_function_coverage=1 00:10:59.164 --rc genhtml_legend=1 00:10:59.164 --rc geninfo_all_blocks=1 00:10:59.164 --rc geninfo_unexecuted_blocks=1 00:10:59.164 00:10:59.164 ' 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:59.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.164 --rc genhtml_branch_coverage=1 00:10:59.164 --rc genhtml_function_coverage=1 00:10:59.164 --rc genhtml_legend=1 00:10:59.164 --rc geninfo_all_blocks=1 00:10:59.164 --rc geninfo_unexecuted_blocks=1 00:10:59.164 00:10:59.164 ' 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:59.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.164 --rc genhtml_branch_coverage=1 00:10:59.164 --rc genhtml_function_coverage=1 00:10:59.164 --rc genhtml_legend=1 00:10:59.164 --rc geninfo_all_blocks=1 00:10:59.164 --rc geninfo_unexecuted_blocks=1 00:10:59.164 00:10:59.164 ' 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:59.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.164 --rc genhtml_branch_coverage=1 00:10:59.164 --rc genhtml_function_coverage=1 00:10:59.164 --rc genhtml_legend=1 00:10:59.164 --rc geninfo_all_blocks=1 00:10:59.164 --rc geninfo_unexecuted_blocks=1 00:10:59.164 00:10:59.164 ' 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
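[Editor's note] Before the next test starts, the harness decides which lcov options it may use by comparing the installed lcov version against 2 ("lt 1.15 2" above): both version strings are split on ".", "-" and ":" (the IFS=.-: lines), missing components are treated as zero, and the pieces are compared numerically left to right. A compact sketch of the same comparison, not the verbatim scripts/common.sh body:

    version_lt() {                     # usage: version_lt 1.15 2 -> true (exit 0)
        local IFS=.-: i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        # walk up to the longer of the two, padding missing components with 0
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                       # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov predates 2.x; use the legacy LCOV_OPTS"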
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.164 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:10:59.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:10:59.165 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:07.320 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:07.321 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:07.321 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:07.321 Found net devices under 0000:31:00.0: cvl_0_0 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.321 17:26:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:07.321 Found net devices under 0000:31:00.1: cvl_0_1 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:07.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:07.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:11:07.321 00:11:07.321 --- 10.0.0.2 ping statistics --- 00:11:07.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.321 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:07.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:07.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:11:07.321 00:11:07.321 --- 10.0.0.1 ping statistics --- 00:11:07.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.321 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=173674 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 173674 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:07.321 17:26:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 173674 ']' 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:07.321 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.321 [2024-10-08 17:26:58.563636] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:11:07.321 [2024-10-08 17:26:58.563697] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.321 [2024-10-08 17:26:58.654075] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.321 [2024-10-08 17:26:58.750169] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.321 [2024-10-08 17:26:58.750227] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.321 [2024-10-08 17:26:58.750236] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.321 [2024-10-08 17:26:58.750247] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.322 [2024-10-08 17:26:58.750254] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
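[Editor's note on the nvmftestinit trace above (nvmf/common.sh@250-@506): on phy runs the target-side e810 port is moved into its own network namespace so initiator and target can exchange real TCP traffic on a single host. A minimal sketch of what those traced steps amount to, using the interface names and addresses from this log; the real script also flushes stale addresses first (@267-@268) and installs retries and cleanup traps that are omitted here:]

# Sketch of the namespace topology nvmftestinit builds (run as root;
# iface names cvl_0_0/cvl_0_1 and 10.0.0.x addresses taken from the trace).
NS=cvl_0_0_ns_spdk
ip netns add "$NS"                        # target gets its own namespace
ip link set cvl_0_0 netns "$NS"           # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port; the comment tags the rule so teardown can find it.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                        # initiator -> target sanity check
ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator sanity check
modprobe nvme-tcp                         # kernel initiator for later steps
# The target app is then launched inside the namespace (nvmf/common.sh@506):
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E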
00:11:07.322 [2024-10-08 17:26:58.752185] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.322 [2024-10-08 17:26:58.752411] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.322 [2024-10-08 17:26:58.752572] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:11:07.322 [2024-10-08 17:26:58.752575] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.584 [2024-10-08 17:26:59.444995] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.584 Malloc0 00:11:07.584 [2024-10-08 17:26:59.514522] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=173797 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 173797 /var/tmp/bdevperf.sock 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 173797 ']' 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:07.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:07.584 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:07.584 { 00:11:07.584 "params": { 00:11:07.584 "name": "Nvme$subsystem", 00:11:07.584 "trtype": "$TEST_TRANSPORT", 00:11:07.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:07.584 "adrfam": "ipv4", 00:11:07.584 "trsvcid": "$NVMF_PORT", 00:11:07.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:07.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:07.584 "hdgst": ${hdgst:-false}, 00:11:07.584 "ddgst": ${ddgst:-false} 00:11:07.584 }, 00:11:07.584 "method": "bdev_nvme_attach_controller" 00:11:07.584 } 00:11:07.584 EOF 00:11:07.584 )") 00:11:07.847 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:11:07.847 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:11:07.847 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:11:07.847 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:07.847 "params": { 00:11:07.847 "name": "Nvme0", 00:11:07.847 "trtype": "tcp", 00:11:07.847 "traddr": "10.0.0.2", 00:11:07.847 "adrfam": "ipv4", 00:11:07.847 "trsvcid": "4420", 00:11:07.847 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:07.847 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:07.847 "hdgst": false, 00:11:07.847 "ddgst": false 00:11:07.847 }, 00:11:07.847 "method": "bdev_nvme_attach_controller" 00:11:07.847 }' 00:11:07.847 [2024-10-08 17:26:59.622836] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
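[Editor's note on the host_management.sh@72 trace above: gen_nvmf_target_json fills the Nvme$subsystem template per subsystem, joins the fragments with IFS=',', and validates the result with jq before it is handed to bdevperf as an anonymous file via process substitution — hence the --json /dev/fd/63 on the traced command line. A minimal sketch of the pattern; only the "params" block is verbatim from the log, and the outer subsystems/bdev wrapper is an assumption about what the helper emits around it:]

# Sketch: feed a generated JSON config to bdevperf without a temp file.
gen_config() {
  # jq . pretty-prints and validates, mirroring nvmf/common.sh@582.
  jq . <<-'JSON'
	{
	  "subsystems": [{
	    "subsystem": "bdev",
	    "config": [{
	      "method": "bdev_nvme_attach_controller",
	      "params": {
	        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
	        "adrfam": "ipv4", "trsvcid": "4420",
	        "subnqn": "nqn.2016-06.io.spdk:cnode0",
	        "hostnqn": "nqn.2016-06.io.spdk:host0",
	        "hdgst": false, "ddgst": false
	      }
	    }]
	  }]
	}
	JSON
}
# <(...) appears to the child as /dev/fd/63, matching the traced invocation.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
  --json <(gen_config) -q 64 -o 65536 -w verify -t 10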
00:11:07.847 [2024-10-08 17:26:59.622900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173797 ] 00:11:07.847 [2024-10-08 17:26:59.708658] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.847 [2024-10-08 17:26:59.805368] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.109 Running I/O for 10 seconds... 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:08.685 17:27:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.685 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:08.685 [2024-10-08 17:27:00.530219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225a6b0 is same with the state(6) to be set 00:11:08.685 [2024-10-08 17:27:00.530325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225a6b0 is same with the state(6) to be set 00:11:08.685 [2024-10-08 17:27:00.530591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.685 [2024-10-08 17:27:00.530650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.685 [2024-10-08 17:27:00.530673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.685 [2024-10-08 17:27:00.530683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.685 [2024-10-08 17:27:00.530694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.685 [2024-10-08 17:27:00.530702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.685 [2024-10-08 17:27:00.530712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.685 [2024-10-08 17:27:00.530720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.685 [2024-10-08 17:27:00.530731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.685 [2024-10-08 17:27:00.530739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.685 [2024-10-08 17:27:00.530750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.685 [2024-10-08 17:27:00.530766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.685 [2024-10-08 17:27:00.530777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.685 [2024-10-08 17:27:00.530785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.685 [2024-10-08 17:27:00.530795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.685 [2024-10-08 17:27:00.530803] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.685 [2024-10-08 17:27:00.530813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.530821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.530831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.530839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.530850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.530858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.530868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.530876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.530887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.530894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.530905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.530913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.530923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.530930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.530941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.530948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.530958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.530967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.530987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.530996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531180] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531367] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.686 [2024-10-08 17:27:00.531525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.686 [2024-10-08 17:27:00.531533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.687 [2024-10-08 17:27:00.531542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.687 [2024-10-08 17:27:00.531549] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.687 [2024-10-08 17:27:00.531560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.687 [2024-10-08 17:27:00.531568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.687 [2024-10-08 17:27:00.531578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.687 [2024-10-08 17:27:00.531586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.687 [2024-10-08 17:27:00.531596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.687 [2024-10-08 17:27:00.531604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.687 [2024-10-08 17:27:00.531615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.687 [2024-10-08 17:27:00.531623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.687 [2024-10-08 17:27:00.531633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.687 [2024-10-08 17:27:00.531640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.687 [2024-10-08 17:27:00.531650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.687 [2024-10-08 17:27:00.531657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.687 [2024-10-08 17:27:00.531667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.687 [2024-10-08 17:27:00.531676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.687 [2024-10-08 17:27:00.531685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.687 [2024-10-08 17:27:00.531695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.687 [2024-10-08 17:27:00.531707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.687 [2024-10-08 17:27:00.531716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.687 [2024-10-08 17:27:00.531727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.687 [2024-10-08 17:27:00.531734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.687 [2024-10-08 17:27:00.531744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.687 [2024-10-08 17:27:00.531751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.687 [2024-10-08 17:27:00.531761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.687 [2024-10-08 17:27:00.531770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.687 [2024-10-08 17:27:00.531780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.687 [2024-10-08 17:27:00.531788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.687 [2024-10-08 17:27:00.531798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.687 [2024-10-08 17:27:00.531805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.687 [2024-10-08 17:27:00.531815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.687 [2024-10-08 17:27:00.531823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.687 [2024-10-08 17:27:00.531835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.687 [2024-10-08 17:27:00.531842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.687 [2024-10-08 17:27:00.531852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1080f60 is same with the state(6) to be set 00:11:08.687 [2024-10-08 17:27:00.531924] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1080f60 was disconnected and freed. reset controller. 
00:11:08.687 [2024-10-08 17:27:00.533199] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:08.687 task offset: 89216 on job bdev=Nvme0n1 fails 00:11:08.687 00:11:08.687 Latency(us) 00:11:08.687 [2024-10-08T15:27:00.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:08.687 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:08.687 Job: Nvme0n1 ended in about 0.51 seconds with error 00:11:08.687 Verification LBA range: start 0x0 length 0x400 00:11:08.687 Nvme0n1 : 0.51 1254.90 78.43 125.49 0.00 45159.91 1993.39 38666.24 00:11:08.687 [2024-10-08T15:27:00.679Z] =================================================================================================================== 00:11:08.687 [2024-10-08T15:27:00.679Z] Total : 1254.90 78.43 125.49 0.00 45159.91 1993.39 38666.24 00:11:08.687 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.687 [2024-10-08 17:27:00.535471] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:08.687 [2024-10-08 17:27:00.535521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe68100 (9): Bad file descriptor 00:11:08.687 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:08.687 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.687 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:08.687 [2024-10-08 17:27:00.537836] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:11:08.687 [2024-10-08 17:27:00.538045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:11:08.687 [2024-10-08 17:27:00.538074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.687 [2024-10-08 17:27:00.538089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:11:08.687 [2024-10-08 17:27:00.538098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:11:08.687 [2024-10-08 17:27:00.538106] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:11:08.687 [2024-10-08 17:27:00.538114] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe68100 00:11:08.687 [2024-10-08 17:27:00.538139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe68100 (9): Bad file descriptor 00:11:08.687 [2024-10-08 17:27:00.538155] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:11:08.687 [2024-10-08 17:27:00.538164] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:11:08.687 [2024-10-08 17:27:00.538174] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
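[Editor's note: the error burst above is the point of the test, not a malfunction. host_management.sh@84 revoked host0's access to cnode0 while bdevperf had I/O in flight, so the target tore down the queue (the long run of ABORTED - SQ DELETION completions), and the host's reconnect was rejected with sct 1/sc 132 ("does not allow host"). @85 then restores access. Distilled to the two RPCs, against a live target on /var/tmp/spdk.sock as set up earlier; the rpc() wrapper is a hypothetical shorthand:]

rpc() { ./scripts/rpc.py "$@"; }   # hypothetical thin wrapper for brevity

# 1) Revoke access mid-I/O: in-flight commands complete as ABORTED -
#    SQ DELETION and the reconnect fails the fabric CONNECT command.
rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 \
    nqn.2016-06.io.spdk:host0

# 2) Restore access: a fresh connect from host0 succeeds again, which the
#    second bdevperf run below verifies.
rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
    nqn.2016-06.io.spdk:host0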
00:11:08.687 [2024-10-08 17:27:00.538189] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:11:08.687 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:08.687 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:11:09.634 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 173797
00:11:09.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (173797) - No such process
00:11:09.634 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:11:09.634 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:11:09.634 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:11:09.634 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:11:09.634 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=()
00:11:09.634 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config
00:11:09.634 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:11:09.634 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:11:09.634 {
00:11:09.634 "params": {
00:11:09.634 "name": "Nvme$subsystem",
00:11:09.634 "trtype": "$TEST_TRANSPORT",
00:11:09.634 "traddr": "$NVMF_FIRST_TARGET_IP",
00:11:09.634 "adrfam": "ipv4",
00:11:09.634 "trsvcid": "$NVMF_PORT",
00:11:09.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:11:09.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:11:09.634 "hdgst": ${hdgst:-false},
00:11:09.634 "ddgst": ${ddgst:-false}
00:11:09.634 },
00:11:09.634 "method": "bdev_nvme_attach_controller"
00:11:09.634 }
00:11:09.634 EOF
00:11:09.634 )")
00:11:09.634 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat
00:11:09.634 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq .
00:11:09.634 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=,
00:11:09.634 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:11:09.634 "params": {
00:11:09.634 "name": "Nvme0",
00:11:09.634 "trtype": "tcp",
00:11:09.634 "traddr": "10.0.0.2",
00:11:09.634 "adrfam": "ipv4",
00:11:09.634 "trsvcid": "4420",
00:11:09.634 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:11:09.634 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:11:09.634 "hdgst": false,
00:11:09.634 "ddgst": false
00:11:09.634 },
00:11:09.634 "method": "bdev_nvme_attach_controller"
00:11:09.634 }'
00:11:09.634 [2024-10-08 17:27:01.606613] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
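
Note how host_management.sh hands bdevperf its bdev configuration on file descriptor 62 (--json /dev/fd/62) rather than writing a temporary file: gen_nvmf_target_json assembles the JSON printed above and the shell wires it onto that descriptor. The same here-doc-on-a-numbered-fd pattern can be tried stand-alone; this sketch uses jq only as a convenient reader and a trivial placeholder document:

  jq . /dev/fd/62 62<<'EOF'
  { "params": { "name": "Nvme0", "trtype": "tcp" } }
  EOF
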
00:11:09.634 [2024-10-08 17:27:01.606670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174298 ]
00:11:09.896 [2024-10-08 17:27:01.686079] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:09.896 [2024-10-08 17:27:01.750287] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:11:10.157 Running I/O for 1 seconds...
00:11:11.104 1792.00 IOPS, 112.00 MiB/s
00:11:11.104 Latency(us)
00:11:11.104 [2024-10-08T15:27:03.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:11.104 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:11:11.104 Verification LBA range: start 0x0 length 0x400
00:11:11.104 Nvme0n1 : 1.02 1816.76 113.55 0.00 0.00 34591.09 6417.07 31238.83
00:11:11.104 [2024-10-08T15:27:03.096Z] ===================================================================================================================
00:11:11.104 [2024-10-08T15:27:03.096Z] Total : 1816.76 113.55 0.00 0.00 34591.09 6417.07 31238.83
00:11:11.104 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:11.366 rmmod nvme_tcp
00:11:11.366 rmmod nvme_fabrics
00:11:11.366 rmmod nvme_keyring
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 173674 ']'
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 173674
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 173674 ']'
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 173674
00:11:11.366 17:27:03
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 173674
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 173674'
00:11:11.366 killing process with pid 173674
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 173674
00:11:11.366 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 173674
00:11:11.366 [2024-10-08 17:27:03.350405] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:11:11.629 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:11:11.629 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:11:11.629 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:11:11.629 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:11:11.629 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save
00:11:11.629 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:11:11.629 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore
00:11:11.629 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:11.629 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:11.629 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:11.629 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:11.629 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:13.554 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:13.554 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:11:13.554
00:11:13.554 real 0m14.821s
00:11:13.554 user 0m23.238s
00:11:13.554 sys 0m6.835s
00:11:13.554 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:13.554 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:13.554 ************************************
00:11:13.554 END TEST nvmf_host_management
00:11:13.554 ************************************
00:11:13.554 17:27:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
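
Each stage of the run is wrapped by run_test from autotest_common.sh, which prints the START TEST / END TEST banners and the real/user/sys timing shown above before chaining into the next stage (here nvmf_lvol). A simplified sketch of that wrapper, assuming only the banner-and-timing behavior visible in this log (the real helper also manages xtrace and exit-status bookkeeping):

  run_test() {                # reduced, hypothetical form of the autotest_common.sh helper
          local name=$1; shift
          echo "START TEST $name"
          time "$@"           # run the test script with its arguments
          echo "END TEST $name"
  }
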
00:11:13.554 17:27:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:13.554 17:27:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:13.554 17:27:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:13.554 ************************************ 00:11:13.554 START TEST nvmf_lvol 00:11:13.554 ************************************ 00:11:13.554 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:13.816 * Looking for test storage... 00:11:13.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:13.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.816 --rc genhtml_branch_coverage=1 00:11:13.816 --rc genhtml_function_coverage=1 00:11:13.816 --rc genhtml_legend=1 00:11:13.816 --rc geninfo_all_blocks=1 00:11:13.816 --rc geninfo_unexecuted_blocks=1 00:11:13.816 00:11:13.816 ' 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:13.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.816 --rc genhtml_branch_coverage=1 00:11:13.816 --rc genhtml_function_coverage=1 00:11:13.816 --rc genhtml_legend=1 00:11:13.816 --rc geninfo_all_blocks=1 00:11:13.816 --rc geninfo_unexecuted_blocks=1 00:11:13.816 00:11:13.816 ' 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:13.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.816 --rc genhtml_branch_coverage=1 00:11:13.816 --rc genhtml_function_coverage=1 00:11:13.816 --rc genhtml_legend=1 00:11:13.816 --rc geninfo_all_blocks=1 00:11:13.816 --rc geninfo_unexecuted_blocks=1 00:11:13.816 00:11:13.816 ' 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:13.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.816 --rc genhtml_branch_coverage=1 00:11:13.816 --rc genhtml_function_coverage=1 00:11:13.816 --rc genhtml_legend=1 00:11:13.816 --rc geninfo_all_blocks=1 00:11:13.816 --rc geninfo_unexecuted_blocks=1 00:11:13.816 00:11:13.816 ' 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
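
The common.sh prologue that follows pins down the run-wide NVMe-oF identity: listener ports 4420 through 4422 and a host NQN taken from nvme-cli, whose UUID suffix doubles as the host ID. Both values can be reproduced stand-alone (a sketch assuming nvme-cli is installed):

  HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-...
  HOSTID=${HOSTNQN##*uuid:}      # bare UUID, matching the NVME_HOSTID value below
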
00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.816 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:13.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:11:13.817 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:21.964 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:21.964 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.964 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.965 17:27:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:21.965 Found net devices under 0000:31:00.0: cvl_0_0 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:21.965 Found net devices under 0000:31:00.1: cvl_0_1 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 ))
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:21.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:21.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms
00:11:21.965
00:11:21.965 --- 10.0.0.2 ping statistics ---
00:11:21.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:21.965 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:21.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:21.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms
00:11:21.965
00:11:21.965 --- 10.0.0.1 ping statistics ---
00:11:21.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:21.965 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=179455
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 179455
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 179455 ']'
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:21.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:21.965 17:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:11:21.965 [2024-10-08 17:27:13.539374] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
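
The trace above builds the split topology the rest of the run depends on: one port of the E810 pair (cvl_0_0) moves into a private network namespace to act as the target at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits TCP/4420, and one ping in each direction proves reachability. Collected from the log into a single block (same device names; needs root and the ice-bound ports present):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator to target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target back to initiator
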
00:11:21.965 [2024-10-08 17:27:13.539442] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.965 [2024-10-08 17:27:13.628965] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:21.965 [2024-10-08 17:27:13.726837] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.965 [2024-10-08 17:27:13.726894] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.965 [2024-10-08 17:27:13.726903] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.965 [2024-10-08 17:27:13.726910] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.965 [2024-10-08 17:27:13.726916] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.965 [2024-10-08 17:27:13.728442] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.965 [2024-10-08 17:27:13.728602] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.965 [2024-10-08 17:27:13.728602] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.538 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:22.538 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:11:22.538 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:22.538 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:22.538 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:22.538 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.538 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:22.800 [2024-10-08 17:27:14.544929] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.800 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:23.061 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:23.061 17:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:23.061 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:23.061 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:23.322 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:23.584 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=bc94da8b-c8de-44cc-b17c-16d19847a192 00:11:23.584 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bc94da8b-c8de-44cc-b17c-16d19847a192 lvol 20
00:11:23.846 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=be3b6fc8-5d7f-400f-9186-5ddf4d5963ed
00:11:23.846 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:11:23.846 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 be3b6fc8-5d7f-400f-9186-5ddf4d5963ed
00:11:24.107 17:27:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:11:24.368 [2024-10-08 17:27:16.157859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:24.368 17:27:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:11:24.629 17:27:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=180157
00:11:24.629 17:27:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:11:24.629 17:27:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:11:25.571 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot be3b6fc8-5d7f-400f-9186-5ddf4d5963ed MY_SNAPSHOT
00:11:25.832 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6ca75318-c3f7-48f4-a309-25d837964f06
00:11:25.832 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize be3b6fc8-5d7f-400f-9186-5ddf4d5963ed 30
00:11:25.832 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6ca75318-c3f7-48f4-a309-25d837964f06 MY_CLONE
00:11:26.093 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b659dfee-bf13-4761-ade5-34de2094f517
00:11:26.093 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b659dfee-bf13-4761-ade5-34de2094f517
00:11:26.353 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 180157
00:11:36.357 Initializing NVMe Controllers
00:11:36.357 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:11:36.357 Controller IO queue size 128, less than required.
00:11:36.357 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
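
The sequence just traced is the whole logical-volume lifecycle exercised under live I/O: carve an lvol out of the RAID0-backed lvstore, export it over NVMe/TCP, then snapshot, resize, clone, and inflate it while spdk_nvme_perf keeps writing. Condensed to the bare RPCs, with the UUIDs exactly as returned above (rpc.py is SPDK's scripts/rpc.py):

  rpc.py bdev_lvol_create -u bc94da8b-c8de-44cc-b17c-16d19847a192 lvol 20
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 be3b6fc8-5d7f-400f-9186-5ddf4d5963ed
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_lvol_snapshot be3b6fc8-5d7f-400f-9186-5ddf4d5963ed MY_SNAPSHOT
  rpc.py bdev_lvol_resize be3b6fc8-5d7f-400f-9186-5ddf4d5963ed 30
  rpc.py bdev_lvol_clone 6ca75318-c3f7-48f4-a309-25d837964f06 MY_CLONE
  rpc.py bdev_lvol_inflate b659dfee-bf13-4761-ade5-34de2094f517
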
00:11:36.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:11:36.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:11:36.357 Initialization complete. Launching workers.
00:11:36.357 ========================================================
00:11:36.357 Latency(us)
00:11:36.357 Device Information : IOPS MiB/s Average min max
00:11:36.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17083.70 66.73 7493.58 354.63 43189.45
00:11:36.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16195.30 63.26 7903.81 2209.18 54762.68
00:11:36.357 ========================================================
00:11:36.357 Total : 33279.00 130.00 7693.22 354.63 54762.68
00:11:36.357
00:11:36.357 17:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:11:36.357 17:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete be3b6fc8-5d7f-400f-9186-5ddf4d5963ed
00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bc94da8b-c8de-44cc-b17c-16d19847a192
00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup
00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:36.357 rmmod nvme_tcp
00:11:36.357 rmmod nvme_fabrics
00:11:36.357 rmmod nvme_keyring
00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 179455 ']'
00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 179455
00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 179455 ']'
00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 179455
00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 179455
00:11:36.357 17:27:27
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 179455' 00:11:36.357 killing process with pid 179455 00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 179455 00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 179455 00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.357 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.741 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:37.741 00:11:37.741 real 0m24.155s 00:11:37.741 user 1m4.877s 00:11:37.741 sys 0m8.765s 00:11:37.741 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:37.741 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:37.741 ************************************ 00:11:37.741 END TEST nvmf_lvol 00:11:37.741 ************************************ 00:11:37.741 17:27:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:37.741 17:27:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:37.741 17:27:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:37.741 17:27:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:38.003 ************************************ 00:11:38.003 START TEST nvmf_lvs_grow 00:11:38.003 ************************************ 00:11:38.003 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:38.003 * Looking for test storage... 
00:11:38.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:38.003 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:38.003 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:11:38.003 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:38.003 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:38.003 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.003 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.003 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:38.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.004 --rc genhtml_branch_coverage=1 00:11:38.004 --rc genhtml_function_coverage=1 00:11:38.004 --rc genhtml_legend=1 00:11:38.004 --rc geninfo_all_blocks=1 00:11:38.004 --rc geninfo_unexecuted_blocks=1 00:11:38.004 00:11:38.004 ' 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:38.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.004 --rc genhtml_branch_coverage=1 00:11:38.004 --rc genhtml_function_coverage=1 00:11:38.004 --rc genhtml_legend=1 00:11:38.004 --rc geninfo_all_blocks=1 00:11:38.004 --rc geninfo_unexecuted_blocks=1 00:11:38.004 00:11:38.004 ' 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:38.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.004 --rc genhtml_branch_coverage=1 00:11:38.004 --rc genhtml_function_coverage=1 00:11:38.004 --rc genhtml_legend=1 00:11:38.004 --rc geninfo_all_blocks=1 00:11:38.004 --rc geninfo_unexecuted_blocks=1 00:11:38.004 00:11:38.004 ' 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:38.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.004 --rc genhtml_branch_coverage=1 00:11:38.004 --rc genhtml_function_coverage=1 00:11:38.004 --rc genhtml_legend=1 00:11:38.004 --rc geninfo_all_blocks=1 00:11:38.004 --rc geninfo_unexecuted_blocks=1 00:11:38.004 00:11:38.004 ' 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:38.004 17:27:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:38.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:38.004 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:38.265 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:38.265 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:38.265 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:38.265 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:38.265 17:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.265 17:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:38.265 17:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:38.265 17:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:38.266 17:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.266 17:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.266 17:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.266 17:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:38.266 17:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:38.266 17:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:11:38.266 17:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:46.405 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:46.405 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:46.405 17:27:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:46.405 Found net devices under 0000:31:00.0: cvl_0_0 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:46.405 Found net devices under 0000:31:00.1: cvl_0_1 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:46.405 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:46.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:46.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms
00:11:46.406
00:11:46.406 --- 10.0.0.2 ping statistics ---
00:11:46.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:46.406 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:46.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:46.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms
00:11:46.406
00:11:46.406 --- 10.0.0.1 ping statistics ---
00:11:46.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:46.406 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=186653
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 186653
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 186653 ']'
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:46.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:46.406 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:11:46.406 [2024-10-08 17:27:37.578440] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
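Stripped of the xtrace prefixes, the nvmf_tcp_init plumbing just traced boils down to the following sequence (a condensed sketch for orientation; interface names, addresses, and the firewall rule are copied from the records above, and the preliminary ip -4 addr flush calls are omitted):

# One port of the detected dual-port E810 NIC (cvl_0_0) is moved into a private
# network namespace to act as the NVMe/TCP target side; its sibling port
# (cvl_0_1) stays in the root namespace as the initiator side.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                 # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator reachability

Because nvmf_tgt is then launched under ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD prefix visible above), initiator-to-target traffic crosses the two physical ports instead of staying on loopback.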
00:11:46.406 [2024-10-08 17:27:37.578503] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.406 [2024-10-08 17:27:37.667518] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.406 [2024-10-08 17:27:37.765820] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.406 [2024-10-08 17:27:37.765880] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.406 [2024-10-08 17:27:37.765888] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.406 [2024-10-08 17:27:37.765896] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.406 [2024-10-08 17:27:37.765902] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.406 [2024-10-08 17:27:37.766748] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.406 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:46.406 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:11:46.406 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:46.406 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:46.406 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:46.667 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.667 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:46.667 [2024-10-08 17:27:38.572895] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.667 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:46.667 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:46.667 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.667 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:46.667 ************************************ 00:11:46.667 START TEST lvs_grow_clean 00:11:46.667 ************************************ 00:11:46.667 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:11:46.667 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:46.667 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:46.667 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:46.667 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:46.667 17:27:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:46.667 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:46.667 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:46.668 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:46.928 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:46.928 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:46.928 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:47.191 17:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c83ecabf-d775-4b44-9010-9367603e003c 00:11:47.191 17:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c83ecabf-d775-4b44-9010-9367603e003c 00:11:47.191 17:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:47.452 17:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:47.452 17:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:47.452 17:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c83ecabf-d775-4b44-9010-9367603e003c lvol 150 00:11:47.452 17:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4c00f750-9821-46cb-8870-4cadc7667c62 00:11:47.713 17:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:47.713 17:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:47.713 [2024-10-08 17:27:39.613347] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:47.713 [2024-10-08 17:27:39.613420] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:47.713 true 00:11:47.713 17:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
c83ecabf-d775-4b44-9010-9367603e003c 00:11:47.713 17:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:47.974 17:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:47.974 17:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:48.235 17:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4c00f750-9821-46cb-8870-4cadc7667c62 00:11:48.235 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:48.496 [2024-10-08 17:27:40.311592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.496 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:48.757 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=187321 00:11:48.757 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:48.757 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:48.757 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 187321 /var/tmp/bdevperf.sock 00:11:48.757 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 187321 ']' 00:11:48.757 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:48.757 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:48.757 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:48.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:48.757 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:48.758 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:48.758 [2024-10-08 17:27:40.566586] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
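Stripped of the xtrace noise, the target-side bring-up recorded above reduces to roughly this RPC sequence (a condensed sketch; every path, UUID, and argument is copied from the trace records, while rpc and tgt are shorthand variables introduced here only for readability):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
$rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8 KiB I/O unit
truncate -s 200M $tgt/aio_bdev                               # 200 MiB backing file
$rpc bdev_aio_create $tgt/aio_bdev aio_bdev 4096             # expose it as a 4 KiB-block AIO bdev
$rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
    --md-pages-per-cluster-ratio 300 aio_bdev lvs            # 4 MiB clusters: 49 usable, the rest holds metadata
$rpc bdev_lvol_create -u c83ecabf-d775-4b44-9010-9367603e003c lvol 150   # 150 MiB lvol = ceil(150/4) = 38 clusters
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4c00f750-9821-46cb-8870-4cadc7667c62
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The 38-cluster figure reappears later in the trace as the lvol's num_allocated_clusters, and it is what makes the grow assertions further down add up.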
00:11:48.758 [2024-10-08 17:27:40.566668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid187321 ] 00:11:48.758 [2024-10-08 17:27:40.652081] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.758 [2024-10-08 17:27:40.745878] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.700 17:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:49.700 17:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:11:49.700 17:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:49.700 Nvme0n1 00:11:49.700 17:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:49.961 [ 00:11:49.961 { 00:11:49.961 "name": "Nvme0n1", 00:11:49.961 "aliases": [ 00:11:49.961 "4c00f750-9821-46cb-8870-4cadc7667c62" 00:11:49.961 ], 00:11:49.961 "product_name": "NVMe disk", 00:11:49.961 "block_size": 4096, 00:11:49.961 "num_blocks": 38912, 00:11:49.961 "uuid": "4c00f750-9821-46cb-8870-4cadc7667c62", 00:11:49.961 "numa_id": 0, 00:11:49.961 "assigned_rate_limits": { 00:11:49.961 "rw_ios_per_sec": 0, 00:11:49.961 "rw_mbytes_per_sec": 0, 00:11:49.961 "r_mbytes_per_sec": 0, 00:11:49.961 "w_mbytes_per_sec": 0 00:11:49.961 }, 00:11:49.961 "claimed": false, 00:11:49.961 "zoned": false, 00:11:49.961 "supported_io_types": { 00:11:49.961 "read": true, 00:11:49.961 "write": true, 00:11:49.961 "unmap": true, 00:11:49.961 "flush": true, 00:11:49.961 "reset": true, 00:11:49.961 "nvme_admin": true, 00:11:49.961 "nvme_io": true, 00:11:49.961 "nvme_io_md": false, 00:11:49.961 "write_zeroes": true, 00:11:49.961 "zcopy": false, 00:11:49.961 "get_zone_info": false, 00:11:49.961 "zone_management": false, 00:11:49.961 "zone_append": false, 00:11:49.961 "compare": true, 00:11:49.961 "compare_and_write": true, 00:11:49.961 "abort": true, 00:11:49.961 "seek_hole": false, 00:11:49.961 "seek_data": false, 00:11:49.961 "copy": true, 00:11:49.961 "nvme_iov_md": false 00:11:49.961 }, 00:11:49.961 "memory_domains": [ 00:11:49.961 { 00:11:49.961 "dma_device_id": "system", 00:11:49.961 "dma_device_type": 1 00:11:49.961 } 00:11:49.961 ], 00:11:49.961 "driver_specific": { 00:11:49.961 "nvme": [ 00:11:49.961 { 00:11:49.961 "trid": { 00:11:49.961 "trtype": "TCP", 00:11:49.961 "adrfam": "IPv4", 00:11:49.961 "traddr": "10.0.0.2", 00:11:49.961 "trsvcid": "4420", 00:11:49.961 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:49.961 }, 00:11:49.961 "ctrlr_data": { 00:11:49.961 "cntlid": 1, 00:11:49.961 "vendor_id": "0x8086", 00:11:49.961 "model_number": "SPDK bdev Controller", 00:11:49.961 "serial_number": "SPDK0", 00:11:49.961 "firmware_revision": "25.01", 00:11:49.961 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:49.961 "oacs": { 00:11:49.961 "security": 0, 00:11:49.961 "format": 0, 00:11:49.961 "firmware": 0, 00:11:49.961 "ns_manage": 0 00:11:49.961 }, 00:11:49.961 "multi_ctrlr": true, 00:11:49.961 
"ana_reporting": false 00:11:49.961 }, 00:11:49.961 "vs": { 00:11:49.961 "nvme_version": "1.3" 00:11:49.961 }, 00:11:49.961 "ns_data": { 00:11:49.961 "id": 1, 00:11:49.961 "can_share": true 00:11:49.961 } 00:11:49.961 } 00:11:49.961 ], 00:11:49.961 "mp_policy": "active_passive" 00:11:49.961 } 00:11:49.961 } 00:11:49.961 ] 00:11:49.961 17:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=187654 00:11:49.961 17:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:49.961 17:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:49.961 Running I/O for 10 seconds... 00:11:51.346 Latency(us) 00:11:51.346 [2024-10-08T15:27:43.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:51.346 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:51.346 Nvme0n1 : 1.00 25016.00 97.72 0.00 0.00 0.00 0.00 0.00 00:11:51.346 [2024-10-08T15:27:43.338Z] =================================================================================================================== 00:11:51.346 [2024-10-08T15:27:43.338Z] Total : 25016.00 97.72 0.00 0.00 0.00 0.00 0.00 00:11:51.346 00:11:51.918 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c83ecabf-d775-4b44-9010-9367603e003c 00:11:51.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:51.918 Nvme0n1 : 2.00 25212.00 98.48 0.00 0.00 0.00 0.00 0.00 00:11:51.918 [2024-10-08T15:27:43.910Z] =================================================================================================================== 00:11:51.918 [2024-10-08T15:27:43.910Z] Total : 25212.00 98.48 0.00 0.00 0.00 0.00 0.00 00:11:51.918 00:11:52.178 true 00:11:52.178 17:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c83ecabf-d775-4b44-9010-9367603e003c 00:11:52.178 17:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:52.439 17:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:52.439 17:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:52.439 17:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 187654 00:11:53.011 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:53.011 Nvme0n1 : 3.00 25283.33 98.76 0.00 0.00 0.00 0.00 0.00 00:11:53.011 [2024-10-08T15:27:45.003Z] =================================================================================================================== 00:11:53.011 [2024-10-08T15:27:45.003Z] Total : 25283.33 98.76 0.00 0.00 0.00 0.00 0.00 00:11:53.011 00:11:53.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:53.955 Nvme0n1 : 4.00 25342.25 98.99 0.00 0.00 0.00 0.00 0.00 00:11:53.955 [2024-10-08T15:27:45.947Z] 
=================================================================================================================== 00:11:53.955 [2024-10-08T15:27:45.947Z] Total : 25342.25 98.99 0.00 0.00 0.00 0.00 0.00 00:11:53.955 00:11:55.343 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:55.343 Nvme0n1 : 5.00 25383.60 99.15 0.00 0.00 0.00 0.00 0.00 00:11:55.343 [2024-10-08T15:27:47.335Z] =================================================================================================================== 00:11:55.343 [2024-10-08T15:27:47.335Z] Total : 25383.60 99.15 0.00 0.00 0.00 0.00 0.00 00:11:55.343 00:11:56.285 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.285 Nvme0n1 : 6.00 25413.83 99.27 0.00 0.00 0.00 0.00 0.00 00:11:56.285 [2024-10-08T15:27:48.277Z] =================================================================================================================== 00:11:56.285 [2024-10-08T15:27:48.277Z] Total : 25413.83 99.27 0.00 0.00 0.00 0.00 0.00 00:11:56.285 00:11:57.230 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:57.230 Nvme0n1 : 7.00 25440.43 99.38 0.00 0.00 0.00 0.00 0.00 00:11:57.230 [2024-10-08T15:27:49.222Z] =================================================================================================================== 00:11:57.230 [2024-10-08T15:27:49.222Z] Total : 25440.43 99.38 0.00 0.00 0.00 0.00 0.00 00:11:57.230 00:11:58.171 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:58.171 Nvme0n1 : 8.00 25456.62 99.44 0.00 0.00 0.00 0.00 0.00 00:11:58.171 [2024-10-08T15:27:50.163Z] =================================================================================================================== 00:11:58.171 [2024-10-08T15:27:50.163Z] Total : 25456.62 99.44 0.00 0.00 0.00 0.00 0.00 00:11:58.171 00:11:59.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.116 Nvme0n1 : 9.00 25468.78 99.49 0.00 0.00 0.00 0.00 0.00 00:11:59.116 [2024-10-08T15:27:51.108Z] =================================================================================================================== 00:11:59.116 [2024-10-08T15:27:51.108Z] Total : 25468.78 99.49 0.00 0.00 0.00 0.00 0.00 00:11:59.117 00:12:00.059 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:00.059 Nvme0n1 : 10.00 25481.90 99.54 0.00 0.00 0.00 0.00 0.00 00:12:00.059 [2024-10-08T15:27:52.051Z] =================================================================================================================== 00:12:00.059 [2024-10-08T15:27:52.051Z] Total : 25481.90 99.54 0.00 0.00 0.00 0.00 0.00 00:12:00.059 00:12:00.059 00:12:00.059 Latency(us) 00:12:00.059 [2024-10-08T15:27:52.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:00.059 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:00.059 Nvme0n1 : 10.00 25479.67 99.53 0.00 0.00 5020.07 2498.56 14854.83 00:12:00.059 [2024-10-08T15:27:52.051Z] =================================================================================================================== 00:12:00.059 [2024-10-08T15:27:52.051Z] Total : 25479.67 99.53 0.00 0.00 5020.07 2498.56 14854.83 00:12:00.059 { 00:12:00.059 "results": [ 00:12:00.059 { 00:12:00.059 "job": "Nvme0n1", 00:12:00.059 "core_mask": "0x2", 00:12:00.059 "workload": "randwrite", 00:12:00.059 "status": "finished", 00:12:00.059 "queue_depth": 128, 00:12:00.059 "io_size": 4096, 00:12:00.059 
"runtime": 10.003346, 00:12:00.059 "iops": 25479.674500911995, 00:12:00.059 "mibps": 99.52997851918748, 00:12:00.059 "io_failed": 0, 00:12:00.059 "io_timeout": 0, 00:12:00.059 "avg_latency_us": 5020.073683037641, 00:12:00.059 "min_latency_us": 2498.56, 00:12:00.059 "max_latency_us": 14854.826666666666 00:12:00.059 } 00:12:00.059 ], 00:12:00.059 "core_count": 1 00:12:00.059 } 00:12:00.059 17:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 187321 00:12:00.059 17:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 187321 ']' 00:12:00.059 17:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 187321 00:12:00.059 17:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:12:00.059 17:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:00.059 17:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 187321 00:12:00.059 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:00.059 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:00.059 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 187321' 00:12:00.059 killing process with pid 187321 00:12:00.059 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 187321 00:12:00.059 Received shutdown signal, test time was about 10.000000 seconds 00:12:00.059 00:12:00.059 Latency(us) 00:12:00.059 [2024-10-08T15:27:52.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:00.059 [2024-10-08T15:27:52.051Z] =================================================================================================================== 00:12:00.059 [2024-10-08T15:27:52.051Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:00.059 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 187321 00:12:00.320 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:00.580 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:00.580 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c83ecabf-d775-4b44-9010-9367603e003c 00:12:00.580 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:00.840 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:00.840 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:00.841 17:27:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:01.102 [2024-10-08 17:27:52.840543] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:01.102 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c83ecabf-d775-4b44-9010-9367603e003c 00:12:01.102 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:12:01.102 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c83ecabf-d775-4b44-9010-9367603e003c 00:12:01.102 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:01.102 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.102 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:01.102 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.102 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:01.102 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.102 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:01.102 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:01.102 17:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c83ecabf-d775-4b44-9010-9367603e003c 00:12:01.102 request: 00:12:01.102 { 00:12:01.102 "uuid": "c83ecabf-d775-4b44-9010-9367603e003c", 00:12:01.102 "method": "bdev_lvol_get_lvstores", 00:12:01.102 "req_id": 1 00:12:01.102 } 00:12:01.102 Got JSON-RPC error response 00:12:01.102 response: 00:12:01.102 { 00:12:01.102 "code": -19, 00:12:01.102 "message": "No such device" 00:12:01.102 } 00:12:01.102 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:12:01.102 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:01.102 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:01.102 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:01.102 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:01.363 aio_bdev 00:12:01.363 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4c00f750-9821-46cb-8870-4cadc7667c62 00:12:01.363 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=4c00f750-9821-46cb-8870-4cadc7667c62 00:12:01.363 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:01.363 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:12:01.363 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:01.363 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:01.363 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:01.624 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4c00f750-9821-46cb-8870-4cadc7667c62 -t 2000 00:12:01.624 [ 00:12:01.624 { 00:12:01.624 "name": "4c00f750-9821-46cb-8870-4cadc7667c62", 00:12:01.624 "aliases": [ 00:12:01.624 "lvs/lvol" 00:12:01.624 ], 00:12:01.624 "product_name": "Logical Volume", 00:12:01.624 "block_size": 4096, 00:12:01.624 "num_blocks": 38912, 00:12:01.624 "uuid": "4c00f750-9821-46cb-8870-4cadc7667c62", 00:12:01.624 "assigned_rate_limits": { 00:12:01.624 "rw_ios_per_sec": 0, 00:12:01.624 "rw_mbytes_per_sec": 0, 00:12:01.624 "r_mbytes_per_sec": 0, 00:12:01.624 "w_mbytes_per_sec": 0 00:12:01.624 }, 00:12:01.624 "claimed": false, 00:12:01.624 "zoned": false, 00:12:01.624 "supported_io_types": { 00:12:01.624 "read": true, 00:12:01.624 "write": true, 00:12:01.624 "unmap": true, 00:12:01.624 "flush": false, 00:12:01.624 "reset": true, 00:12:01.624 "nvme_admin": false, 00:12:01.624 "nvme_io": false, 00:12:01.624 "nvme_io_md": false, 00:12:01.624 "write_zeroes": true, 00:12:01.624 "zcopy": false, 00:12:01.624 "get_zone_info": false, 00:12:01.624 "zone_management": false, 00:12:01.624 "zone_append": false, 00:12:01.624 "compare": false, 00:12:01.624 "compare_and_write": false, 00:12:01.624 "abort": false, 00:12:01.624 "seek_hole": true, 00:12:01.624 "seek_data": true, 00:12:01.624 "copy": false, 00:12:01.624 "nvme_iov_md": false 00:12:01.624 }, 00:12:01.624 "driver_specific": { 00:12:01.625 "lvol": { 00:12:01.625 "lvol_store_uuid": "c83ecabf-d775-4b44-9010-9367603e003c", 00:12:01.625 "base_bdev": "aio_bdev", 00:12:01.625 "thin_provision": false, 00:12:01.625 "num_allocated_clusters": 38, 00:12:01.625 "snapshot": false, 00:12:01.625 "clone": false, 00:12:01.625 "esnap_clone": false 00:12:01.625 } 00:12:01.625 } 00:12:01.625 } 00:12:01.625 ] 00:12:01.625 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:12:01.625 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c83ecabf-d775-4b44-9010-9367603e003c 00:12:01.625 
17:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:01.885 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:01.885 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c83ecabf-d775-4b44-9010-9367603e003c 00:12:01.885 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:02.146 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:02.146 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4c00f750-9821-46cb-8870-4cadc7667c62 00:12:02.146 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c83ecabf-d775-4b44-9010-9367603e003c 00:12:02.407 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:02.407 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:02.668 00:12:02.668 real 0m15.766s 00:12:02.668 user 0m15.463s 00:12:02.668 sys 0m1.412s 00:12:02.668 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:02.668 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:02.668 ************************************ 00:12:02.668 END TEST lvs_grow_clean 00:12:02.668 ************************************ 00:12:02.668 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:02.668 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:02.668 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:02.668 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:02.668 ************************************ 00:12:02.668 START TEST lvs_grow_dirty 00:12:02.668 ************************************ 00:12:02.668 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:12:02.668 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:02.668 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:02.668 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:02.668 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:02.668 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:02.668 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:02.668 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:02.668 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:02.668 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:02.929 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:02.929 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:02.929 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b926319f-8337-422d-aedd-048bb96e1eba 00:12:02.929 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b926319f-8337-422d-aedd-048bb96e1eba 00:12:02.929 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:03.189 17:27:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:03.189 17:27:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:03.189 17:27:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b926319f-8337-422d-aedd-048bb96e1eba lvol 150 00:12:03.450 17:27:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e063b79c-c23f-49d5-9fbd-95716c6640ce 00:12:03.450 17:27:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:03.450 17:27:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:03.450 [2024-10-08 17:27:55.383282] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:03.450 [2024-10-08 17:27:55.383322] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:03.450 true 00:12:03.450 17:27:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b926319f-8337-422d-aedd-048bb96e1eba 00:12:03.450 17:27:55 
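
The setup traced here is the heart of the grow test: a 200 MiB backing file, an AIO bdev on top, an lvstore with 4 MiB clusters, and a 150 MiB lvol; the file is then grown to 400 MiB and the AIO bdev rescanned. A condensed sketch of those steps, with rpc.py shortened onto PATH and a hypothetical /tmp path standing in for the test's aio_bdev file:

truncate -s 200M /tmp/aio_backing            # sparse 200 MiB file
rpc.py bdev_aio_create /tmp/aio_backing aio_bdev 4096
# 200 MiB / 4 MiB clusters = 50, one reserved for metadata => 49 usable
lvs_uuid=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
    --md-pages-per-cluster-ratio 300 aio_bdev lvs)
rpc.py bdev_lvol_create -u "$lvs_uuid" lvol 150
truncate -s 400M /tmp/aio_backing            # grow the backing file
rpc.py bdev_aio_rescan aio_bdev              # 51200 -> 102400 4 KiB blocks
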
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:03.711 17:27:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:03.711 17:27:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:03.971 17:27:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e063b79c-c23f-49d5-9fbd-95716c6640ce 00:12:03.971 17:27:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:04.231 [2024-10-08 17:27:56.057237] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.232 17:27:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:04.492 17:27:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=190528 00:12:04.492 17:27:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:04.492 17:27:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:04.492 17:27:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 190528 /var/tmp/bdevperf.sock 00:12:04.492 17:27:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 190528 ']' 00:12:04.492 17:27:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:04.492 17:27:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:04.492 17:27:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:04.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:04.492 17:27:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:04.492 17:27:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:04.492 [2024-10-08 17:27:56.285468] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
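
With the lvol exported, the run splits into a target side (subsystem, namespace, listener) and an initiator side: bdevperf is launched with -z so it idles until it is configured over its own RPC socket, which is what the waitforlisten above waits for. A condensed sketch of both halves, with binary and script paths shortened and $lvol_uuid standing in for the lvol's UUID:

# target: export the lvol over NVMe/TCP
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol_uuid"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
# initiator: bdevperf waits under -z for configuration on its socket
bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 \
    -w randwrite -t 10 -S 1 -z &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests    # 10 s randwrite run
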
00:12:04.492 [2024-10-08 17:27:56.285520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid190528 ] 00:12:04.492 [2024-10-08 17:27:56.363744] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.492 [2024-10-08 17:27:56.417647] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.432 17:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:05.432 17:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:12:05.432 17:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:05.432 Nvme0n1 00:12:05.432 17:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:05.693 [ 00:12:05.693 { 00:12:05.693 "name": "Nvme0n1", 00:12:05.693 "aliases": [ 00:12:05.693 "e063b79c-c23f-49d5-9fbd-95716c6640ce" 00:12:05.693 ], 00:12:05.693 "product_name": "NVMe disk", 00:12:05.693 "block_size": 4096, 00:12:05.693 "num_blocks": 38912, 00:12:05.693 "uuid": "e063b79c-c23f-49d5-9fbd-95716c6640ce", 00:12:05.693 "numa_id": 0, 00:12:05.693 "assigned_rate_limits": { 00:12:05.693 "rw_ios_per_sec": 0, 00:12:05.693 "rw_mbytes_per_sec": 0, 00:12:05.693 "r_mbytes_per_sec": 0, 00:12:05.693 "w_mbytes_per_sec": 0 00:12:05.693 }, 00:12:05.693 "claimed": false, 00:12:05.693 "zoned": false, 00:12:05.693 "supported_io_types": { 00:12:05.693 "read": true, 00:12:05.693 "write": true, 00:12:05.693 "unmap": true, 00:12:05.693 "flush": true, 00:12:05.693 "reset": true, 00:12:05.693 "nvme_admin": true, 00:12:05.693 "nvme_io": true, 00:12:05.693 "nvme_io_md": false, 00:12:05.693 "write_zeroes": true, 00:12:05.693 "zcopy": false, 00:12:05.693 "get_zone_info": false, 00:12:05.693 "zone_management": false, 00:12:05.693 "zone_append": false, 00:12:05.693 "compare": true, 00:12:05.693 "compare_and_write": true, 00:12:05.693 "abort": true, 00:12:05.693 "seek_hole": false, 00:12:05.693 "seek_data": false, 00:12:05.693 "copy": true, 00:12:05.693 "nvme_iov_md": false 00:12:05.693 }, 00:12:05.693 "memory_domains": [ 00:12:05.693 { 00:12:05.693 "dma_device_id": "system", 00:12:05.693 "dma_device_type": 1 00:12:05.693 } 00:12:05.693 ], 00:12:05.693 "driver_specific": { 00:12:05.693 "nvme": [ 00:12:05.693 { 00:12:05.693 "trid": { 00:12:05.693 "trtype": "TCP", 00:12:05.693 "adrfam": "IPv4", 00:12:05.693 "traddr": "10.0.0.2", 00:12:05.693 "trsvcid": "4420", 00:12:05.693 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:05.693 }, 00:12:05.693 "ctrlr_data": { 00:12:05.693 "cntlid": 1, 00:12:05.693 "vendor_id": "0x8086", 00:12:05.693 "model_number": "SPDK bdev Controller", 00:12:05.693 "serial_number": "SPDK0", 00:12:05.693 "firmware_revision": "25.01", 00:12:05.693 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:05.693 "oacs": { 00:12:05.693 "security": 0, 00:12:05.693 "format": 0, 00:12:05.693 "firmware": 0, 00:12:05.693 "ns_manage": 0 00:12:05.693 }, 00:12:05.693 "multi_ctrlr": true, 00:12:05.693 
"ana_reporting": false 00:12:05.693 }, 00:12:05.693 "vs": { 00:12:05.693 "nvme_version": "1.3" 00:12:05.693 }, 00:12:05.693 "ns_data": { 00:12:05.693 "id": 1, 00:12:05.693 "can_share": true 00:12:05.693 } 00:12:05.693 } 00:12:05.693 ], 00:12:05.693 "mp_policy": "active_passive" 00:12:05.693 } 00:12:05.693 } 00:12:05.693 ] 00:12:05.693 17:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=190759 00:12:05.693 17:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:05.693 17:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:05.693 Running I/O for 10 seconds... 00:12:07.077 Latency(us) 00:12:07.077 [2024-10-08T15:27:59.069Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.077 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:07.077 Nvme0n1 : 1.00 25240.00 98.59 0.00 0.00 0.00 0.00 0.00 00:12:07.077 [2024-10-08T15:27:59.069Z] =================================================================================================================== 00:12:07.077 [2024-10-08T15:27:59.069Z] Total : 25240.00 98.59 0.00 0.00 0.00 0.00 0.00 00:12:07.077 00:12:07.648 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b926319f-8337-422d-aedd-048bb96e1eba 00:12:07.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:07.909 Nvme0n1 : 2.00 25355.00 99.04 0.00 0.00 0.00 0.00 0.00 00:12:07.909 [2024-10-08T15:27:59.901Z] =================================================================================================================== 00:12:07.909 [2024-10-08T15:27:59.901Z] Total : 25355.00 99.04 0.00 0.00 0.00 0.00 0.00 00:12:07.909 00:12:07.909 true 00:12:07.909 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b926319f-8337-422d-aedd-048bb96e1eba 00:12:07.909 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:08.169 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:08.169 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:08.169 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 190759 00:12:08.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:08.740 Nvme0n1 : 3.00 25388.33 99.17 0.00 0.00 0.00 0.00 0.00 00:12:08.740 [2024-10-08T15:28:00.732Z] =================================================================================================================== 00:12:08.740 [2024-10-08T15:28:00.732Z] Total : 25388.33 99.17 0.00 0.00 0.00 0.00 0.00 00:12:08.740 00:12:09.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:09.681 Nvme0n1 : 4.00 25440.50 99.38 0.00 0.00 0.00 0.00 0.00 00:12:09.681 [2024-10-08T15:28:01.673Z] 
=================================================================================================================== 00:12:09.681 [2024-10-08T15:28:01.673Z] Total : 25440.50 99.38 0.00 0.00 0.00 0.00 0.00 00:12:09.681 00:12:11.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:11.067 Nvme0n1 : 5.00 25471.80 99.50 0.00 0.00 0.00 0.00 0.00 00:12:11.067 [2024-10-08T15:28:03.059Z] =================================================================================================================== 00:12:11.067 [2024-10-08T15:28:03.059Z] Total : 25471.80 99.50 0.00 0.00 0.00 0.00 0.00 00:12:11.067 00:12:12.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:12.012 Nvme0n1 : 6.00 25504.00 99.62 0.00 0.00 0.00 0.00 0.00 00:12:12.013 [2024-10-08T15:28:04.005Z] =================================================================================================================== 00:12:12.013 [2024-10-08T15:28:04.005Z] Total : 25504.00 99.62 0.00 0.00 0.00 0.00 0.00 00:12:12.013 00:12:12.958 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:12.958 Nvme0n1 : 7.00 25526.86 99.71 0.00 0.00 0.00 0.00 0.00 00:12:12.958 [2024-10-08T15:28:04.950Z] =================================================================================================================== 00:12:12.958 [2024-10-08T15:28:04.950Z] Total : 25526.86 99.71 0.00 0.00 0.00 0.00 0.00 00:12:12.958 00:12:13.902 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:13.902 Nvme0n1 : 8.00 25543.88 99.78 0.00 0.00 0.00 0.00 0.00 00:12:13.902 [2024-10-08T15:28:05.894Z] =================================================================================================================== 00:12:13.902 [2024-10-08T15:28:05.894Z] Total : 25543.88 99.78 0.00 0.00 0.00 0.00 0.00 00:12:13.902 00:12:14.846 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:14.846 Nvme0n1 : 9.00 25556.89 99.83 0.00 0.00 0.00 0.00 0.00 00:12:14.846 [2024-10-08T15:28:06.838Z] =================================================================================================================== 00:12:14.846 [2024-10-08T15:28:06.838Z] Total : 25556.89 99.83 0.00 0.00 0.00 0.00 0.00 00:12:14.846 00:12:15.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:15.790 Nvme0n1 : 10.00 25567.00 99.87 0.00 0.00 0.00 0.00 0.00 00:12:15.790 [2024-10-08T15:28:07.782Z] =================================================================================================================== 00:12:15.790 [2024-10-08T15:28:07.782Z] Total : 25567.00 99.87 0.00 0.00 0.00 0.00 0.00 00:12:15.790 00:12:15.790 00:12:15.790 Latency(us) 00:12:15.790 [2024-10-08T15:28:07.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:15.790 Nvme0n1 : 10.01 25566.51 99.87 0.00 0.00 5003.39 1501.87 8683.52 00:12:15.790 [2024-10-08T15:28:07.782Z] =================================================================================================================== 00:12:15.790 [2024-10-08T15:28:07.782Z] Total : 25566.51 99.87 0.00 0.00 5003.39 1501.87 8683.52 00:12:15.790 { 00:12:15.790 "results": [ 00:12:15.790 { 00:12:15.790 "job": "Nvme0n1", 00:12:15.790 "core_mask": "0x2", 00:12:15.790 "workload": "randwrite", 00:12:15.790 "status": "finished", 00:12:15.790 "queue_depth": 128, 00:12:15.790 "io_size": 4096, 00:12:15.790 
"runtime": 10.005199, 00:12:15.790 "iops": 25566.507972505096, 00:12:15.790 "mibps": 99.86917176759803, 00:12:15.790 "io_failed": 0, 00:12:15.790 "io_timeout": 0, 00:12:15.790 "avg_latency_us": 5003.3930164687235, 00:12:15.790 "min_latency_us": 1501.8666666666666, 00:12:15.790 "max_latency_us": 8683.52 00:12:15.790 } 00:12:15.790 ], 00:12:15.790 "core_count": 1 00:12:15.790 } 00:12:15.790 17:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 190528 00:12:15.790 17:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 190528 ']' 00:12:15.790 17:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 190528 00:12:15.790 17:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:12:15.790 17:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:15.790 17:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 190528 00:12:15.790 17:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:15.790 17:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:15.790 17:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 190528' 00:12:15.790 killing process with pid 190528 00:12:15.790 17:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 190528 00:12:15.790 Received shutdown signal, test time was about 10.000000 seconds 00:12:15.790 00:12:15.790 Latency(us) 00:12:15.790 [2024-10-08T15:28:07.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.790 [2024-10-08T15:28:07.782Z] =================================================================================================================== 00:12:15.790 [2024-10-08T15:28:07.782Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:15.790 17:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 190528 00:12:16.052 17:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:16.313 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:16.313 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b926319f-8337-422d-aedd-048bb96e1eba 00:12:16.313 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:16.573 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:16.573 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:16.573 17:28:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 186653 00:12:16.573 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 186653 00:12:16.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 186653 Killed "${NVMF_APP[@]}" "$@" 00:12:16.573 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:16.573 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:16.573 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:16.573 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:16.573 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:16.573 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=193099 00:12:16.573 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 193099 00:12:16.573 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:16.573 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 193099 ']' 00:12:16.573 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.573 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:16.573 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.573 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:16.573 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:16.573 [2024-10-08 17:28:08.558409] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:12:16.573 [2024-10-08 17:28:08.558467] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.834 [2024-10-08 17:28:08.642913] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.834 [2024-10-08 17:28:08.696864] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.834 [2024-10-08 17:28:08.696896] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.834 [2024-10-08 17:28:08.696901] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.834 [2024-10-08 17:28:08.696906] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:12:16.834 [2024-10-08 17:28:08.696910] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:16.834 [2024-10-08 17:28:08.697360] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.405 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:17.405 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:12:17.405 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:17.406 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:17.406 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:17.406 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.406 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:17.667 [2024-10-08 17:28:09.542648] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:17.667 [2024-10-08 17:28:09.542725] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:17.667 [2024-10-08 17:28:09.542747] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:17.667 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:17.667 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e063b79c-c23f-49d5-9fbd-95716c6640ce 00:12:17.667 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=e063b79c-c23f-49d5-9fbd-95716c6640ce 00:12:17.667 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:17.667 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:12:17.667 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:17.667 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:17.667 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:17.929 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e063b79c-c23f-49d5-9fbd-95716c6640ce -t 2000 00:12:17.929 [ 00:12:17.929 { 00:12:17.929 "name": "e063b79c-c23f-49d5-9fbd-95716c6640ce", 00:12:17.929 "aliases": [ 00:12:17.929 "lvs/lvol" 00:12:17.929 ], 00:12:17.929 "product_name": "Logical Volume", 00:12:17.929 "block_size": 4096, 00:12:17.929 "num_blocks": 38912, 00:12:17.929 "uuid": "e063b79c-c23f-49d5-9fbd-95716c6640ce", 00:12:17.929 "assigned_rate_limits": { 00:12:17.929 "rw_ios_per_sec": 0, 00:12:17.929 "rw_mbytes_per_sec": 0, 
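
The bs_recover notices are the payoff of the dirty shutdown: on reload the blobstore replays its metadata and recovers both blobs (the lvstore's internal blob and the 150 MiB lvol) without any explicit re-create. All that remains is to assert that the grown geometry survived, which the checks that follow do; a minimal equivalent, assuming jq is available and $lvs_uuid holds the store's UUID:

free=$(rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')
total=$(rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters')
(( total == 99 && free == 61 ))    # grown store, lvol allocation intact
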
00:12:17.929 "r_mbytes_per_sec": 0, 00:12:17.929 "w_mbytes_per_sec": 0 00:12:17.929 }, 00:12:17.929 "claimed": false, 00:12:17.929 "zoned": false, 00:12:17.929 "supported_io_types": { 00:12:17.929 "read": true, 00:12:17.929 "write": true, 00:12:17.929 "unmap": true, 00:12:17.929 "flush": false, 00:12:17.929 "reset": true, 00:12:17.929 "nvme_admin": false, 00:12:17.929 "nvme_io": false, 00:12:17.929 "nvme_io_md": false, 00:12:17.929 "write_zeroes": true, 00:12:17.929 "zcopy": false, 00:12:17.929 "get_zone_info": false, 00:12:17.929 "zone_management": false, 00:12:17.929 "zone_append": false, 00:12:17.929 "compare": false, 00:12:17.929 "compare_and_write": false, 00:12:17.929 "abort": false, 00:12:17.929 "seek_hole": true, 00:12:17.929 "seek_data": true, 00:12:17.929 "copy": false, 00:12:17.929 "nvme_iov_md": false 00:12:17.929 }, 00:12:17.929 "driver_specific": { 00:12:17.929 "lvol": { 00:12:17.929 "lvol_store_uuid": "b926319f-8337-422d-aedd-048bb96e1eba", 00:12:17.929 "base_bdev": "aio_bdev", 00:12:17.929 "thin_provision": false, 00:12:17.929 "num_allocated_clusters": 38, 00:12:17.929 "snapshot": false, 00:12:17.929 "clone": false, 00:12:17.929 "esnap_clone": false 00:12:17.929 } 00:12:17.929 } 00:12:17.929 } 00:12:17.929 ] 00:12:17.929 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:12:17.929 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b926319f-8337-422d-aedd-048bb96e1eba 00:12:17.929 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:18.190 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:18.190 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b926319f-8337-422d-aedd-048bb96e1eba 00:12:18.190 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:18.451 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:18.451 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:18.451 [2024-10-08 17:28:10.407853] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:18.712 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b926319f-8337-422d-aedd-048bb96e1eba 00:12:18.712 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:12:18.712 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b926319f-8337-422d-aedd-048bb96e1eba 00:12:18.712 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:18.712 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.712 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:18.712 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.712 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:18.712 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.712 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:18.712 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:18.712 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b926319f-8337-422d-aedd-048bb96e1eba 00:12:18.712 request: 00:12:18.712 { 00:12:18.712 "uuid": "b926319f-8337-422d-aedd-048bb96e1eba", 00:12:18.712 "method": "bdev_lvol_get_lvstores", 00:12:18.712 "req_id": 1 00:12:18.712 } 00:12:18.712 Got JSON-RPC error response 00:12:18.712 response: 00:12:18.712 { 00:12:18.712 "code": -19, 00:12:18.712 "message": "No such device" 00:12:18.712 } 00:12:18.712 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:12:18.712 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:18.712 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:18.712 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:18.712 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:18.973 aio_bdev 00:12:18.973 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e063b79c-c23f-49d5-9fbd-95716c6640ce 00:12:18.973 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=e063b79c-c23f-49d5-9fbd-95716c6640ce 00:12:18.974 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:18.974 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:12:18.974 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:18.974 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:18.974 17:28:10 
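
The NOT helper traced above asserts the inverse condition: once bdev_aio_delete hot-removed the base bdev, the lvstore lookup must fail, and rpc.py turns the JSON-RPC error (-19, "No such device") into a non-zero exit status. A simpler equivalent of that negative check, followed by the re-attach that brings the lvstore back via examine; the /tmp path is the same hypothetical stand-in as above:

if rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" 2>/dev/null; then
    echo "lvstore unexpectedly still present" >&2; exit 1
fi
# re-creating the AIO bdev lets examine rediscover the blobstore,
# so the lvstore and its lvol reappear with no explicit re-create
rpc.py bdev_aio_create /tmp/aio_backing aio_bdev 4096
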
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:19.235 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e063b79c-c23f-49d5-9fbd-95716c6640ce -t 2000 00:12:19.235 [ 00:12:19.235 { 00:12:19.235 "name": "e063b79c-c23f-49d5-9fbd-95716c6640ce", 00:12:19.235 "aliases": [ 00:12:19.235 "lvs/lvol" 00:12:19.235 ], 00:12:19.235 "product_name": "Logical Volume", 00:12:19.235 "block_size": 4096, 00:12:19.235 "num_blocks": 38912, 00:12:19.235 "uuid": "e063b79c-c23f-49d5-9fbd-95716c6640ce", 00:12:19.235 "assigned_rate_limits": { 00:12:19.235 "rw_ios_per_sec": 0, 00:12:19.235 "rw_mbytes_per_sec": 0, 00:12:19.235 "r_mbytes_per_sec": 0, 00:12:19.235 "w_mbytes_per_sec": 0 00:12:19.235 }, 00:12:19.235 "claimed": false, 00:12:19.235 "zoned": false, 00:12:19.235 "supported_io_types": { 00:12:19.235 "read": true, 00:12:19.235 "write": true, 00:12:19.235 "unmap": true, 00:12:19.235 "flush": false, 00:12:19.235 "reset": true, 00:12:19.235 "nvme_admin": false, 00:12:19.235 "nvme_io": false, 00:12:19.235 "nvme_io_md": false, 00:12:19.235 "write_zeroes": true, 00:12:19.235 "zcopy": false, 00:12:19.235 "get_zone_info": false, 00:12:19.235 "zone_management": false, 00:12:19.235 "zone_append": false, 00:12:19.235 "compare": false, 00:12:19.235 "compare_and_write": false, 00:12:19.235 "abort": false, 00:12:19.235 "seek_hole": true, 00:12:19.235 "seek_data": true, 00:12:19.235 "copy": false, 00:12:19.235 "nvme_iov_md": false 00:12:19.235 }, 00:12:19.235 "driver_specific": { 00:12:19.235 "lvol": { 00:12:19.235 "lvol_store_uuid": "b926319f-8337-422d-aedd-048bb96e1eba", 00:12:19.235 "base_bdev": "aio_bdev", 00:12:19.235 "thin_provision": false, 00:12:19.235 "num_allocated_clusters": 38, 00:12:19.235 "snapshot": false, 00:12:19.235 "clone": false, 00:12:19.235 "esnap_clone": false 00:12:19.235 } 00:12:19.235 } 00:12:19.235 } 00:12:19.235 ] 00:12:19.235 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:12:19.235 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b926319f-8337-422d-aedd-048bb96e1eba 00:12:19.235 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:19.496 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:19.496 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b926319f-8337-422d-aedd-048bb96e1eba 00:12:19.496 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:19.757 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:19.757 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e063b79c-c23f-49d5-9fbd-95716c6640ce 00:12:19.757 17:28:11 
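
The asserted numbers fall straight out of the cluster geometry; a quick worked check, assuming the 4 MiB cluster size from creation and one cluster reserved for metadata (consistent with the 49-of-50 figure before the grow):

# 400 MiB file / 4 MiB cluster = 100 clusters, minus 1 for metadata = 99
echo $(( 400 / 4 - 1 ))       # 99 total_data_clusters
# 150 MiB lvol, thick-provisioned: ceil(150 / 4) clusters allocated
echo $(( (150 + 3) / 4 ))     # 38 num_allocated_clusters
echo $(( 99 - 38 ))           # 61 free_clusters
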
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b926319f-8337-422d-aedd-048bb96e1eba 00:12:20.017 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:20.279 00:12:20.279 real 0m17.603s 00:12:20.279 user 0m45.774s 00:12:20.279 sys 0m3.020s 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:20.279 ************************************ 00:12:20.279 END TEST lvs_grow_dirty 00:12:20.279 ************************************ 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:20.279 nvmf_trace.0 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:20.279 rmmod nvme_tcp 00:12:20.279 rmmod nvme_fabrics 00:12:20.279 rmmod nvme_keyring 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:12:20.279 
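
Teardown in nvmftestfini is deliberately tolerant: unloading nvme-tcp can fail while connections drain, so errexit is dropped and the modprobe is retried. A condensed sketch of the loop behind the rmmod lines above; the retry cadence is an assumption, only the 20-iteration loop and the module names come from the trace:

sync
set +e                                  # rmmod may transiently fail
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
    sleep 0.5                           # retry cadence is a guess
done
modprobe -v -r nvme-fabrics             # dependency pulled in by nvme-tcp
set -e
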
17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 193099 ']' 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 193099 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 193099 ']' 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 193099 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:20.279 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 193099 00:12:20.540 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:20.540 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:20.540 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 193099' 00:12:20.540 killing process with pid 193099 00:12:20.540 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 193099 00:12:20.540 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 193099 00:12:20.540 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:20.540 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:20.540 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:20.540 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:12:20.540 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:12:20.540 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:20.540 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:12:20.540 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:20.540 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:20.540 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.540 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.540 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:23.088 00:12:23.088 real 0m44.741s 00:12:23.088 user 1m7.635s 00:12:23.088 sys 0m10.548s 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:23.088 ************************************ 00:12:23.088 END TEST nvmf_lvs_grow 00:12:23.088 ************************************ 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:23.088 ************************************ 00:12:23.088 START TEST nvmf_bdev_io_wait 00:12:23.088 ************************************ 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:23.088 * Looking for test storage... 00:12:23.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:23.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.088 --rc genhtml_branch_coverage=1 00:12:23.088 --rc genhtml_function_coverage=1 00:12:23.088 --rc genhtml_legend=1 00:12:23.088 --rc geninfo_all_blocks=1 00:12:23.088 --rc geninfo_unexecuted_blocks=1 00:12:23.088 00:12:23.088 ' 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:23.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.088 --rc genhtml_branch_coverage=1 00:12:23.088 --rc genhtml_function_coverage=1 00:12:23.088 --rc genhtml_legend=1 00:12:23.088 --rc geninfo_all_blocks=1 00:12:23.088 --rc geninfo_unexecuted_blocks=1 00:12:23.088 00:12:23.088 ' 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:23.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.088 --rc genhtml_branch_coverage=1 00:12:23.088 --rc genhtml_function_coverage=1 00:12:23.088 --rc genhtml_legend=1 00:12:23.088 --rc geninfo_all_blocks=1 00:12:23.088 --rc geninfo_unexecuted_blocks=1 00:12:23.088 00:12:23.088 ' 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:23.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.088 --rc genhtml_branch_coverage=1 00:12:23.088 --rc genhtml_function_coverage=1 00:12:23.088 --rc genhtml_legend=1 00:12:23.088 --rc geninfo_all_blocks=1 00:12:23.088 --rc geninfo_unexecuted_blocks=1 00:12:23.088 00:12:23.088 ' 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.088 17:28:14 
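
The long comparison trace above is scripts/common.sh deciding which lcov flags to use by testing the installed version (1.15) against 2, field by field. A stripped-down equivalent of that lt/cmp_versions logic, under a hypothetical name and assuming purely numeric dotted versions:

# succeed if $1 < $2, comparing dot-separated numeric fields
version_lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                      # equal is not less-than
}
version_lt 1.15 2 && echo "lcov < 2: use --rc lcov_*_coverage options"
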
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:23.088 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:23.089 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.089 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.089 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.089 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:23.089 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:23.089 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:23.089 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:23.089 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:23.089 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:12:23.089 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:23.089 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:23.089 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.089 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:23.089 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:23.089 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:23.089 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.089 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.089 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.089 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:23.089 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:23.089 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:12:23.089 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:31.251 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:31.252 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:31.252 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.252 17:28:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:31.252 Found net devices under 0000:31:00.0: cvl_0_0 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:31.252 Found net devices under 0000:31:00.1: cvl_0_1 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:31.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.721 ms 00:12:31.252 00:12:31.252 --- 10.0.0.2 ping statistics --- 00:12:31.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.252 rtt min/avg/max/mdev = 0.721/0.721/0.721/0.000 ms 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:31.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:12:31.252 00:12:31.252 --- 10.0.0.1 ping statistics --- 00:12:31.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.252 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=198249 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 198249 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 198249 ']' 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:31.252 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:31.252 [2024-10-08 17:28:22.504405] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
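
The nvmf_tcp_init/nvmfappstart sequence traced above boils down to: carve out a network namespace for the target, give each side of the link an address on 10.0.0.0/24, open TCP port 4420 through the firewall, confirm reachability with ping in both directions, then launch nvmf_tgt inside the namespace. A minimal sketch of the same rig, assuming a veth pair stands in for the physical cvl_0_0/cvl_0_1 ports (tgt_ns, veth_tgt and veth_ini are illustrative names, not ones the suite uses):

    set -e
    ip netns add tgt_ns                            # target side gets its own namespace
    ip link add veth_ini type veth peer name veth_tgt
    ip link set veth_tgt netns tgt_ns              # move one end into the namespace
    ip addr add 10.0.0.1/24 dev veth_ini           # initiator IP stays in the root namespace
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_ini up
    ip netns exec tgt_ns ip link set veth_tgt up
    ip netns exec tgt_ns ip link set lo up
    # tag the rule so teardown can find it later, as the log's ipts helper does
    iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:port 4420'
    ping -c 1 10.0.0.2                             # reachability check, as in the log
    ip netns exec tgt_ns ping -c 1 10.0.0.1

Running the target under ip netns exec is what lets initiator and target share one host while still exercising a real TCP path.
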
00:12:31.252 [2024-10-08 17:28:22.504468] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.252 [2024-10-08 17:28:22.595168] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.252 [2024-10-08 17:28:22.691049] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.252 [2024-10-08 17:28:22.691115] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.252 [2024-10-08 17:28:22.691124] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.253 [2024-10-08 17:28:22.691131] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.253 [2024-10-08 17:28:22.691138] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.253 [2024-10-08 17:28:22.693624] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.253 [2024-10-08 17:28:22.693785] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.253 [2024-10-08 17:28:22.693943] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.253 [2024-10-08 17:28:22.693945] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.522 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:31.522 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:12:31.522 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:31.522 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:31.522 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:31.522 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.522 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:31.522 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.522 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:31.522 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.523 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:31.523 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.523 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:31.523 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.523 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:31.523 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.523 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:12:31.523 [2024-10-08 17:28:23.453095] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.523 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.523 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:31.523 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.523 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:31.523 Malloc0 00:12:31.523 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.523 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:31.523 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.523 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:31.787 [2024-10-08 17:28:23.536074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=198400 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=198403 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:12:31.787 { 00:12:31.787 "params": { 
00:12:31.787 "name": "Nvme$subsystem", 00:12:31.787 "trtype": "$TEST_TRANSPORT", 00:12:31.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:31.787 "adrfam": "ipv4", 00:12:31.787 "trsvcid": "$NVMF_PORT", 00:12:31.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:31.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:31.787 "hdgst": ${hdgst:-false}, 00:12:31.787 "ddgst": ${ddgst:-false} 00:12:31.787 }, 00:12:31.787 "method": "bdev_nvme_attach_controller" 00:12:31.787 } 00:12:31.787 EOF 00:12:31.787 )") 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=198406 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:12:31.787 { 00:12:31.787 "params": { 00:12:31.787 "name": "Nvme$subsystem", 00:12:31.787 "trtype": "$TEST_TRANSPORT", 00:12:31.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:31.787 "adrfam": "ipv4", 00:12:31.787 "trsvcid": "$NVMF_PORT", 00:12:31.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:31.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:31.787 "hdgst": ${hdgst:-false}, 00:12:31.787 "ddgst": ${ddgst:-false} 00:12:31.787 }, 00:12:31.787 "method": "bdev_nvme_attach_controller" 00:12:31.787 } 00:12:31.787 EOF 00:12:31.787 )") 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=198410 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:12:31.787 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:12:31.787 { 00:12:31.787 "params": { 00:12:31.787 "name": "Nvme$subsystem", 00:12:31.787 "trtype": "$TEST_TRANSPORT", 00:12:31.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:31.787 "adrfam": "ipv4", 00:12:31.787 "trsvcid": "$NVMF_PORT", 00:12:31.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:31.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:31.788 "hdgst": ${hdgst:-false}, 
00:12:31.788 "ddgst": ${ddgst:-false} 00:12:31.788 }, 00:12:31.788 "method": "bdev_nvme_attach_controller" 00:12:31.788 } 00:12:31.788 EOF 00:12:31.788 )") 00:12:31.788 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:31.788 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:31.788 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:12:31.788 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:12:31.788 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:12:31.788 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:12:31.788 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:12:31.788 { 00:12:31.788 "params": { 00:12:31.788 "name": "Nvme$subsystem", 00:12:31.788 "trtype": "$TEST_TRANSPORT", 00:12:31.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:31.788 "adrfam": "ipv4", 00:12:31.788 "trsvcid": "$NVMF_PORT", 00:12:31.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:31.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:31.788 "hdgst": ${hdgst:-false}, 00:12:31.788 "ddgst": ${ddgst:-false} 00:12:31.788 }, 00:12:31.788 "method": "bdev_nvme_attach_controller" 00:12:31.788 } 00:12:31.788 EOF 00:12:31.788 )") 00:12:31.788 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:12:31.788 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 198400 00:12:31.788 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:12:31.788 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:12:31.788 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:12:31.788 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:12:31.788 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:12:31.788 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:12:31.788 "params": { 00:12:31.788 "name": "Nvme1", 00:12:31.788 "trtype": "tcp", 00:12:31.788 "traddr": "10.0.0.2", 00:12:31.788 "adrfam": "ipv4", 00:12:31.788 "trsvcid": "4420", 00:12:31.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:31.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:31.788 "hdgst": false, 00:12:31.788 "ddgst": false 00:12:31.788 }, 00:12:31.788 "method": "bdev_nvme_attach_controller" 00:12:31.788 }' 00:12:31.788 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:12:31.788 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:12:31.788 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:12:31.788 "params": { 00:12:31.788 "name": "Nvme1", 00:12:31.788 "trtype": "tcp", 00:12:31.788 "traddr": "10.0.0.2", 00:12:31.788 "adrfam": "ipv4", 00:12:31.788 "trsvcid": "4420", 00:12:31.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:31.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:31.788 "hdgst": false, 00:12:31.788 "ddgst": false 00:12:31.788 }, 00:12:31.788 "method": "bdev_nvme_attach_controller" 00:12:31.788 }' 00:12:31.788 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:12:31.788 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:12:31.788 "params": { 00:12:31.788 "name": "Nvme1", 00:12:31.788 "trtype": "tcp", 00:12:31.788 "traddr": "10.0.0.2", 00:12:31.788 "adrfam": "ipv4", 00:12:31.788 "trsvcid": "4420", 00:12:31.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:31.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:31.788 "hdgst": false, 00:12:31.788 "ddgst": false 00:12:31.788 }, 00:12:31.788 "method": "bdev_nvme_attach_controller" 00:12:31.788 }' 00:12:31.788 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:12:31.788 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:12:31.788 "params": { 00:12:31.788 "name": "Nvme1", 00:12:31.788 "trtype": "tcp", 00:12:31.788 "traddr": "10.0.0.2", 00:12:31.788 "adrfam": "ipv4", 00:12:31.788 "trsvcid": "4420", 00:12:31.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:31.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:31.788 "hdgst": false, 00:12:31.788 "ddgst": false 00:12:31.788 }, 00:12:31.788 "method": "bdev_nvme_attach_controller" 00:12:31.788 }' 00:12:31.788 [2024-10-08 17:28:23.592792] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:12:31.788 [2024-10-08 17:28:23.592863] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:31.788 [2024-10-08 17:28:23.596559] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:12:31.788 [2024-10-08 17:28:23.596625] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:31.788 [2024-10-08 17:28:23.598468] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:12:31.788 [2024-10-08 17:28:23.598534] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:31.788 [2024-10-08 17:28:23.601696] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
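
The four EAL startups echoed around here belong to the write/read/flush/unmap bdevperf jobs launched a moment earlier on disjoint core masks (0x10, 0x20, 0x40, 0x80) with distinct -i instance IDs, then reaped with wait so any failing workload fails the whole test. An illustrative re-creation of that launch loop (bdevperf and gen_json again stand in for the full paths):

    pids=()
    mask=0x10
    inst=1
    for w in write read flush unmap; do
        bdevperf -m "$mask" -i "$inst" --json <(gen_json) \
            -q 128 -o 4096 -w "$w" -t 1 -s 256 &
        pids+=($!)                                # track each child PID
        mask=$(printf '0x%x' $((mask << 1)))      # 0x10 -> 0x20 -> 0x40 -> 0x80
        inst=$((inst + 1))                        # distinct -i so shm files don't clash
    done
    for pid in "${pids[@]}"; do
        wait "$pid"                               # any non-zero exit fails the run
    done
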
00:12:31.788 [2024-10-08 17:28:23.601762] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:32.049 [2024-10-08 17:28:23.806673] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.049 [2024-10-08 17:28:23.878915] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:12:32.049 [2024-10-08 17:28:23.901285] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.049 [2024-10-08 17:28:23.966441] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:12:32.049 [2024-10-08 17:28:23.969109] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.050 [2024-10-08 17:28:24.032586] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:12:32.050 [2024-10-08 17:28:24.032704] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.311 [2024-10-08 17:28:24.103809] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:12:32.311 Running I/O for 1 seconds... 00:12:32.311 Running I/O for 1 seconds... 00:12:32.573 Running I/O for 1 seconds... 00:12:32.573 Running I/O for 1 seconds... 00:12:33.519 8911.00 IOPS, 34.81 MiB/s 00:12:33.520 Latency(us) 00:12:33.520 [2024-10-08T15:28:25.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.520 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:33.520 Nvme1n1 : 1.02 8914.03 34.82 0.00 0.00 14253.44 7645.87 29054.29 00:12:33.520 [2024-10-08T15:28:25.512Z] =================================================================================================================== 00:12:33.520 [2024-10-08T15:28:25.512Z] Total : 8914.03 34.82 0.00 0.00 14253.44 7645.87 29054.29 00:12:33.520 187760.00 IOPS, 733.44 MiB/s 00:12:33.520 Latency(us) 00:12:33.520 [2024-10-08T15:28:25.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.520 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:33.520 Nvme1n1 : 1.00 187381.62 731.96 0.00 0.00 679.00 324.27 2007.04 00:12:33.520 [2024-10-08T15:28:25.512Z] =================================================================================================================== 00:12:33.520 [2024-10-08T15:28:25.512Z] Total : 187381.62 731.96 0.00 0.00 679.00 324.27 2007.04 00:12:33.520 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 198403 00:12:33.520 11169.00 IOPS, 43.63 MiB/s 00:12:33.520 Latency(us) 00:12:33.520 [2024-10-08T15:28:25.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.520 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:33.520 Nvme1n1 : 1.01 11209.14 43.79 0.00 0.00 11373.18 6307.84 21408.43 00:12:33.520 [2024-10-08T15:28:25.512Z] =================================================================================================================== 00:12:33.520 [2024-10-08T15:28:25.512Z] Total : 11209.14 43.79 0.00 0.00 11373.18 6307.84 21408.43 00:12:33.781 8790.00 IOPS, 34.34 MiB/s 00:12:33.781 Latency(us) 00:12:33.781 [2024-10-08T15:28:25.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.781 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:33.781 Nvme1n1 : 1.01 8911.15 34.81 0.00 0.00 14326.07 3822.93 40413.87 00:12:33.781 
[2024-10-08T15:28:25.773Z] =================================================================================================================== 00:12:33.781 [2024-10-08T15:28:25.773Z] Total : 8911.15 34.81 0.00 0.00 14326.07 3822.93 40413.87 00:12:33.781 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 198406 00:12:33.781 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 198410 00:12:33.781 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.781 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.781 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.781 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.781 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:33.781 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:33.781 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:33.781 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:12:33.781 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:33.781 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:12:33.781 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:33.781 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:33.781 rmmod nvme_tcp 00:12:33.781 rmmod nvme_fabrics 00:12:34.042 rmmod nvme_keyring 00:12:34.042 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:34.042 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:12:34.042 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:12:34.042 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 198249 ']' 00:12:34.042 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 198249 00:12:34.042 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 198249 ']' 00:12:34.042 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 198249 00:12:34.042 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:12:34.042 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:34.042 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 198249 00:12:34.042 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:34.042 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:34.042 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 198249' 00:12:34.042 killing process with pid 198249 
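
killprocess, traced just above, refuses to fire blindly: it probes the PID with kill -0, reads the command name back with ps, and bails out rather than SIGKILL a sudo wrapper. A sketch of that guard plus the firewall and namespace cleanup that follows (tgt_ns reuses the illustrative name from the earlier sketch):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0    # nothing left to kill
        # never signal the sudo wrapper itself, only the real target process
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }
    killprocess "$nvmfpid"
    # Rules were inserted with '-m comment --comment SPDK_NVMF:...', so one
    # filter pass removes them all with no per-rule bookkeeping:
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete tgt_ns 2>/dev/null || true    # drop the target namespace
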
00:12:34.042 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 198249 00:12:34.042 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 198249 00:12:34.042 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:34.042 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:34.042 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:34.042 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:12:34.042 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:12:34.042 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:34.042 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:12:34.042 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:34.042 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:34.042 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.042 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.042 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:36.591 00:12:36.591 real 0m13.494s 00:12:36.591 user 0m20.859s 00:12:36.591 sys 0m7.737s 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:36.591 ************************************ 00:12:36.591 END TEST nvmf_bdev_io_wait 00:12:36.591 ************************************ 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:36.591 ************************************ 00:12:36.591 START TEST nvmf_queue_depth 00:12:36.591 ************************************ 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:36.591 * Looking for test storage... 
00:12:36.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:36.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.591 --rc genhtml_branch_coverage=1 00:12:36.591 --rc genhtml_function_coverage=1 00:12:36.591 --rc genhtml_legend=1 00:12:36.591 --rc geninfo_all_blocks=1 00:12:36.591 --rc geninfo_unexecuted_blocks=1 00:12:36.591 00:12:36.591 ' 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:36.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.591 --rc genhtml_branch_coverage=1 00:12:36.591 --rc genhtml_function_coverage=1 00:12:36.591 --rc genhtml_legend=1 00:12:36.591 --rc geninfo_all_blocks=1 00:12:36.591 --rc geninfo_unexecuted_blocks=1 00:12:36.591 00:12:36.591 ' 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:36.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.591 --rc genhtml_branch_coverage=1 00:12:36.591 --rc genhtml_function_coverage=1 00:12:36.591 --rc genhtml_legend=1 00:12:36.591 --rc geninfo_all_blocks=1 00:12:36.591 --rc geninfo_unexecuted_blocks=1 00:12:36.591 00:12:36.591 ' 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:36.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.591 --rc genhtml_branch_coverage=1 00:12:36.591 --rc genhtml_function_coverage=1 00:12:36.591 --rc genhtml_legend=1 00:12:36.591 --rc geninfo_all_blocks=1 00:12:36.591 --rc geninfo_unexecuted_blocks=1 00:12:36.591 00:12:36.591 ' 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.591 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:36.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:12:36.592 17:28:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:44.737 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:44.737 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:12:44.737 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:44.737 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:44.737 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:44.737 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:44.737 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:44.737 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:44.738 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:44.738 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:44.738 Found net devices under 0000:31:00.0: cvl_0_0 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:44.738 Found net devices under 0000:31:00.1: cvl_0_1 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
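Two details of the trace above are worth annotating. First, the stray "[: : integer expression expected" message comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': an empty variable reached a numeric test (bash's [ requires integers on both sides of -eq), the test fails, and the script falls through, so the error is cosmetic. Second, gather_supported_nvmf_pci_devs has just matched both ports of an Intel E810 NIC (vendor 0x8086, device 0x159b at 0000:31:00.0/.1, ice driver) and resolved their net devices, cvl_0_0 and cvl_0_1. A condensed sketch of that discovery logic, assuming a pci_bus_cache map from "vendor:device" to PCI addresses is populated beforehand as in the real script (x722/mlx handling omitted):

  # simplified from the e810 branch traced above
  intel=0x8086
  e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
  pci_devs=("${e810[@]}")
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # each port's netdev lives in sysfs
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
      net_devs+=("${pci_net_devs[@]}")
  done
  echo "${net_devs[@]}"    # on this rig: cvl_0_0 cvl_0_1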
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:44.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:44.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:12:44.738 00:12:44.738 --- 10.0.0.2 ping statistics --- 00:12:44.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.738 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:12:44.738 17:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:44.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:44.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:12:44.738 00:12:44.738 --- 10.0.0.1 ping statistics --- 00:12:44.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.739 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:12:44.739 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.739 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:12:44.739 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:44.739 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.739 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:44.739 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:44.739 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.739 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:44.739 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:44.739 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:44.739 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:44.739 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:44.739 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:44.739 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=203327 00:12:44.739 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 203327 00:12:44.739 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:44.739 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 203327 ']' 00:12:44.739 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.739 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:44.739 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.739 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:44.739 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:44.739 [2024-10-08 17:28:36.123073] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
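By this point nvmf_tcp_init has built the back-to-back test topology out of the two E810 ports: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed 10.0.0.2/24 as the target side, cvl_0_1 stays in the default namespace as the 10.0.0.1/24 initiator side, an iptables rule opens TCP port 4420 for the NVMe/TCP listener, and the two pings confirm reachability in both directions. A minimal reproduction of that setup, assuming root and the interface names from this trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # default ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator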
00:12:44.739 [2024-10-08 17:28:36.123139] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.739 [2024-10-08 17:28:36.216939] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.739 [2024-10-08 17:28:36.309205] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.739 [2024-10-08 17:28:36.309266] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.739 [2024-10-08 17:28:36.309275] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.739 [2024-10-08 17:28:36.309282] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.739 [2024-10-08 17:28:36.309288] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:44.739 [2024-10-08 17:28:36.310148] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.000 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:45.000 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:12:45.000 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:45.000 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:45.000 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.000 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.263 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:45.263 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.263 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.263 [2024-10-08 17:28:36.998913] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.263 Malloc0 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.263 17:28:37 
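The target itself is started inside that namespace (nvmfappstart -m 0x2): core mask 0x2 pins the single reactor to core 1, -e 0xFFFF enables every tracepoint group (hence the spdk_trace hints and the /dev/shm/nvmf_trace.0 notice), and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A rough equivalent of that launch; the polling loop below is an assumption standing in for the script's actual waitforlisten implementation:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # poll the RPC socket until the target is up (stand-in for waitforlisten)
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.1
  done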
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.263 [2024-10-08 17:28:37.067910] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=203406 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 203406 /var/tmp/bdevperf.sock 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 203406 ']' 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:45.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:45.263 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.263 [2024-10-08 17:28:37.124969] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
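queue_depth.sh then configures the target over the default RPC socket and launches bdevperf against a socket of its own: a TCP transport with 8192 shared buffers, a 64 MiB / 512 B-block malloc bdev, and subsystem cnode1 (allow-any-host, serial SPDK00000000000001) carrying that bdev as a namespace plus a listener on 10.0.0.2:4420. Condensed from the rpc_cmd calls traced above, with paths relative to the spdk checkout:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevperf: wait for RPC (-z), own socket, queue depth 1024, 4 KiB I/O, verify workload, 10 s
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &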
00:12:45.263 [2024-10-08 17:28:37.125045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid203406 ] 00:12:45.263 [2024-10-08 17:28:37.210141] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.525 [2024-10-08 17:28:37.310799] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.097 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:46.097 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:12:46.097 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:46.097 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.097 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:46.358 NVMe0n1 00:12:46.358 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.358 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:46.358 Running I/O for 10 seconds... 00:12:48.686 11264.00 IOPS, 44.00 MiB/s [2024-10-08T15:28:41.621Z] 11274.00 IOPS, 44.04 MiB/s [2024-10-08T15:28:42.565Z] 11449.33 IOPS, 44.72 MiB/s [2024-10-08T15:28:43.508Z] 11798.50 IOPS, 46.09 MiB/s [2024-10-08T15:28:44.451Z] 12082.80 IOPS, 47.20 MiB/s [2024-10-08T15:28:45.392Z] 12294.00 IOPS, 48.02 MiB/s [2024-10-08T15:28:46.337Z] 12502.14 IOPS, 48.84 MiB/s [2024-10-08T15:28:47.722Z] 12671.00 IOPS, 49.50 MiB/s [2024-10-08T15:28:48.294Z] 12774.00 IOPS, 49.90 MiB/s [2024-10-08T15:28:48.555Z] 12897.10 IOPS, 50.38 MiB/s 00:12:56.563 Latency(us) 00:12:56.563 [2024-10-08T15:28:48.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.563 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:56.563 Verification LBA range: start 0x0 length 0x4000 00:12:56.563 NVMe0n1 : 10.06 12923.12 50.48 0.00 0.00 78982.95 24576.00 64662.19 00:12:56.563 [2024-10-08T15:28:48.555Z] =================================================================================================================== 00:12:56.563 [2024-10-08T15:28:48.555Z] Total : 12923.12 50.48 0.00 0.00 78982.95 24576.00 64662.19 00:12:56.563 { 00:12:56.563 "results": [ 00:12:56.563 { 00:12:56.563 "job": "NVMe0n1", 00:12:56.563 "core_mask": "0x1", 00:12:56.563 "workload": "verify", 00:12:56.563 "status": "finished", 00:12:56.563 "verify_range": { 00:12:56.563 "start": 0, 00:12:56.563 "length": 16384 00:12:56.563 }, 00:12:56.563 "queue_depth": 1024, 00:12:56.563 "io_size": 4096, 00:12:56.563 "runtime": 10.057092, 00:12:56.563 "iops": 12923.119327137507, 00:12:56.563 "mibps": 50.480934871630886, 00:12:56.563 "io_failed": 0, 00:12:56.563 "io_timeout": 0, 00:12:56.563 "avg_latency_us": 78982.94738304261, 00:12:56.563 "min_latency_us": 24576.0, 00:12:56.563 "max_latency_us": 64662.18666666667 00:12:56.563 } 00:12:56.563 ], 00:12:56.563 "core_count": 1 00:12:56.563 } 00:12:56.563 17:28:48 
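The initiator side, condensed from the trace above: attach an NVMe-oF controller through bdevperf's RPC socket, then let bdevperf.py drive the 10-second run.

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The reported totals are internally consistent. Throughput: 12923.12 IOPS x 4096 B is about 52.93 MB/s, i.e. 50.48 MiB/s, matching the MiB/s column. Latency follows Little's law at a fixed queue depth: W = L / lambda = 1024 / 12923.12 IOPS, roughly 79.2 ms, in line with the reported average of 78,983 us (the small gap comes from the 10.057 s runtime including ramp-up and drain).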
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 203406 00:12:56.563 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 203406 ']' 00:12:56.563 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 203406 00:12:56.563 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:12:56.563 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:56.563 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 203406 00:12:56.563 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:56.563 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:56.563 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 203406' 00:12:56.563 killing process with pid 203406 00:12:56.563 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 203406 00:12:56.563 Received shutdown signal, test time was about 10.000000 seconds 00:12:56.563 00:12:56.563 Latency(us) 00:12:56.563 [2024-10-08T15:28:48.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.563 [2024-10-08T15:28:48.555Z] =================================================================================================================== 00:12:56.563 [2024-10-08T15:28:48.555Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:56.563 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 203406 00:12:56.563 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:56.563 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:56.563 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:56.563 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:12:56.563 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:56.563 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:12:56.563 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:56.563 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:56.823 rmmod nvme_tcp 00:12:56.823 rmmod nvme_fabrics 00:12:56.823 rmmod nvme_keyring 00:12:56.823 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:56.823 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:12:56.823 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:12:56.823 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 203327 ']' 00:12:56.823 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 203327 00:12:56.823 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 203327 ']' 00:12:56.823 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@954 -- # kill -0 203327 00:12:56.823 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:12:56.823 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:56.823 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 203327 00:12:56.823 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:56.823 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:56.823 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 203327' 00:12:56.823 killing process with pid 203327 00:12:56.823 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 203327 00:12:56.823 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 203327 00:12:57.084 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:57.084 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:57.084 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:57.084 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:12:57.084 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:12:57.084 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:57.084 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:12:57.084 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:57.084 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:57.084 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.084 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.084 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.997 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:58.997 00:12:58.997 real 0m22.730s 00:12:58.997 user 0m25.978s 00:12:58.997 sys 0m7.122s 00:12:58.997 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:58.998 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:58.998 ************************************ 00:12:58.998 END TEST nvmf_queue_depth 00:12:58.998 ************************************ 00:12:58.998 17:28:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:58.998 17:28:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:58.998 17:28:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:58.998 17:28:50 nvmf_tcp.nvmf_target_core -- 
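nvmftestfini then unwinds the whole setup; the kill -0 probe above only checks that the pid is still alive before "killing process with pid 203327" is printed. The cleanup traced across this span, condensed into its effective commands (the namespace removal happens inside _remove_spdk_ns; assuming it boils down to ip netns delete):

  kill "$bdevperf_pid" && wait "$bdevperf_pid"            # bdevperf, pid 203406 here
  modprobe -v -r nvme-tcp                                 # also drops nvme_fabrics, nvme_keyring
  kill "$nvmfpid" && wait "$nvmfpid"                      # nvmf_tgt, pid 203327 here
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # strip only our tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                         # assumption: what _remove_spdk_ns does
  ip -4 addr flush cvl_0_1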
common/autotest_common.sh@10 -- # set +x 00:12:58.998 ************************************ 00:12:58.998 START TEST nvmf_target_multipath 00:12:58.998 ************************************ 00:12:58.998 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:59.259 * Looking for test storage... 00:12:59.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.259 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:59.259 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:12:59.259 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:59.259 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:59.259 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:59.259 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:59.259 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:59.259 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:12:59.259 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:12:59.259 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:12:59.259 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:12:59.259 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:12:59.259 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:12:59.259 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:12:59.259 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:59.259 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:12:59.259 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:12:59.259 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:59.259 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:59.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.260 --rc genhtml_branch_coverage=1 00:12:59.260 --rc genhtml_function_coverage=1 00:12:59.260 --rc genhtml_legend=1 00:12:59.260 --rc geninfo_all_blocks=1 00:12:59.260 --rc geninfo_unexecuted_blocks=1 00:12:59.260 00:12:59.260 ' 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:59.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.260 --rc genhtml_branch_coverage=1 00:12:59.260 --rc genhtml_function_coverage=1 00:12:59.260 --rc genhtml_legend=1 00:12:59.260 --rc geninfo_all_blocks=1 00:12:59.260 --rc geninfo_unexecuted_blocks=1 00:12:59.260 00:12:59.260 ' 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:59.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.260 --rc genhtml_branch_coverage=1 00:12:59.260 --rc genhtml_function_coverage=1 00:12:59.260 --rc genhtml_legend=1 00:12:59.260 --rc geninfo_all_blocks=1 00:12:59.260 --rc geninfo_unexecuted_blocks=1 00:12:59.260 00:12:59.260 ' 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:59.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.260 --rc genhtml_branch_coverage=1 00:12:59.260 --rc genhtml_function_coverage=1 00:12:59.260 --rc genhtml_legend=1 00:12:59.260 --rc geninfo_all_blocks=1 00:12:59.260 --rc geninfo_unexecuted_blocks=1 00:12:59.260 00:12:59.260 ' 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:59.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:12:59.260 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:07.405 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:07.405 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.405 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:07.406 Found net devices under 0000:31:00.0: cvl_0_0 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.406 17:28:58 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:07.406 Found net devices under 0000:31:00.1: cvl_0_1 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:07.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:07.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:13:07.406 00:13:07.406 --- 10.0.0.2 ping statistics --- 00:13:07.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.406 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:07.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:07.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:13:07.406 00:13:07.406 --- 10.0.0.1 ping statistics --- 00:13:07.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.406 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:13:07.406 only one NIC for nvmf test 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
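The nvmf_tcp_init sequence traced above is worth reading slowly, since every nvmf-tcp test in this run repeats it: one ice port (cvl_0_0) is moved into a private network namespace to act as the NVMe/TCP target, its back-to-back peer (cvl_0_1) stays in the root namespace as the initiator, and a firewall pinhole is opened for the listener port. A minimal standalone sketch, assuming the same interface names and 10.0.0.0/24 addressing this host happens to use:

# give the target its own namespace so initiator and target traffic
# really crosses the two physical E810 ports
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port; the comment tags the rule so the
# teardown can find and drop it later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
# sanity pings in both directions before any NVMe traffic
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1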
00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:07.406 rmmod nvme_tcp 00:13:07.406 rmmod nvme_fabrics 00:13:07.406 rmmod nvme_keyring 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.406 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:09.323 00:13:09.323 real 0m10.110s 00:13:09.323 user 0m2.239s 00:13:09.323 sys 0m5.796s 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:09.323 ************************************ 00:13:09.323 END TEST nvmf_target_multipath 00:13:09.323 ************************************ 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:09.323 ************************************ 00:13:09.323 START TEST nvmf_zcopy 00:13:09.323 ************************************ 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:09.323 * Looking for test storage... 
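The multipath test itself bailed out almost immediately ('only one NIC for nvmf test'): multipath.sh@45 tests NVMF_SECOND_TARGET_IP, which nvmf_tcp_init left empty on this two-port rig, so the script exits 0 after running nvmftestfini once explicitly (multipath.sh@47) and once more from its EXIT trap (multipath.sh@1). That teardown undoes the setup sketched earlier; roughly, under the same naming assumptions (the _remove_spdk_ns internals are hidden behind xtrace_disable_per_cmd, so the namespace removal below is an approximation):

# unload the host-side NVMe/TCP stack; the trace retries inside a
# set +e window because the modules can still be referenced briefly
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# drop only the SPDK_NVMF-tagged firewall rules, keep everything else
iptables-save | grep -v SPDK_NVMF | iptables-restore
# tear down the target namespace and flush the initiator address
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1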
00:13:09.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:13:09.323 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:09.584 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:09.584 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:09.584 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:09.584 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:09.584 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:13:09.584 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:13:09.584 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:13:09.584 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:13:09.584 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:13:09.584 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:13:09.584 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:13:09.584 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:09.584 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:13:09.584 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:09.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.585 --rc genhtml_branch_coverage=1 00:13:09.585 --rc genhtml_function_coverage=1 00:13:09.585 --rc genhtml_legend=1 00:13:09.585 --rc geninfo_all_blocks=1 00:13:09.585 --rc geninfo_unexecuted_blocks=1 00:13:09.585 00:13:09.585 ' 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:09.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.585 --rc genhtml_branch_coverage=1 00:13:09.585 --rc genhtml_function_coverage=1 00:13:09.585 --rc genhtml_legend=1 00:13:09.585 --rc geninfo_all_blocks=1 00:13:09.585 --rc geninfo_unexecuted_blocks=1 00:13:09.585 00:13:09.585 ' 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:09.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.585 --rc genhtml_branch_coverage=1 00:13:09.585 --rc genhtml_function_coverage=1 00:13:09.585 --rc genhtml_legend=1 00:13:09.585 --rc geninfo_all_blocks=1 00:13:09.585 --rc geninfo_unexecuted_blocks=1 00:13:09.585 00:13:09.585 ' 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:09.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.585 --rc genhtml_branch_coverage=1 00:13:09.585 --rc genhtml_function_coverage=1 00:13:09.585 --rc genhtml_legend=1 00:13:09.585 --rc geninfo_all_blocks=1 00:13:09.585 --rc geninfo_unexecuted_blocks=1 00:13:09.585 00:13:09.585 ' 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:09.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:13:09.585 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:17.731 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:17.731 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:17.731 Found net devices under 0000:31:00.0: cvl_0_0 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:17.731 Found net devices under 0000:31:00.1: cvl_0_1 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.731 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.732 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.732 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.732 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:17.732 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:17.732 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.732 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:17.732 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:17.732 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:17.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:17.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.700 ms 00:13:17.732 00:13:17.732 --- 10.0.0.2 ping statistics --- 00:13:17.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.732 rtt min/avg/max/mdev = 0.700/0.700/0.700/0.000 ms 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:17.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:17.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:13:17.732 00:13:17.732 --- 10.0.0.1 ping statistics --- 00:13:17.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.732 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=214544 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 214544 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 214544 ']' 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:17.732 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.732 [2024-10-08 17:29:09.127380] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
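nvmfappstart has now launched the target inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2, i.e. shm id 0, all tracepoint groups, core 1 only) and waitforlisten blocks until pid 214544 answers on the RPC socket. A minimal sketch of that launch-and-wait pattern, with paths relative to an SPDK checkout and the polling loop standing in, as an assumption, for waitforlisten's internals:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# poll the default RPC socket until the app is ready to accept commands
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done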
00:13:17.732 [2024-10-08 17:29:09.127446] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.732 [2024-10-08 17:29:09.197992] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.732 [2024-10-08 17:29:09.290238] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.732 [2024-10-08 17:29:09.290293] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.732 [2024-10-08 17:29:09.290302] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.732 [2024-10-08 17:29:09.290310] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.732 [2024-10-08 17:29:09.290316] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:17.732 [2024-10-08 17:29:09.291122] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.993 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:17.993 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:13:17.993 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:17.993 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:17.993 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:18.255 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.255 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:18.255 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:18.255 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.255 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:18.255 [2024-10-08 17:29:09.999660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:18.255 [2024-10-08 17:29:10.023967] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:18.255 malloc0 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:13:18.255 { 00:13:18.255 "params": { 00:13:18.255 "name": "Nvme$subsystem", 00:13:18.255 "trtype": "$TEST_TRANSPORT", 00:13:18.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:18.255 "adrfam": "ipv4", 00:13:18.255 "trsvcid": "$NVMF_PORT", 00:13:18.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:18.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:18.255 "hdgst": ${hdgst:-false}, 00:13:18.255 "ddgst": ${ddgst:-false} 00:13:18.255 }, 00:13:18.255 "method": "bdev_nvme_attach_controller" 00:13:18.255 } 00:13:18.255 EOF 00:13:18.255 )") 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
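With the target listening, the rpc_cmd calls traced above map one-to-one onto SPDK's stock scripts/rpc.py (rpc_cmd is a thin wrapper around it); the flags below are taken verbatim from the trace:

# TCP transport, zero-copy enabled (the point of this test); -c 0 sets the
# in-capsule data size to zero, presumably so transfers take the zcopy path
./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
# subsystem: allow any host (-a), fixed serial (-s), up to 10 namespaces (-m)
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# 32 MiB ram-backed bdev with 4 KiB blocks, exported as namespace 1
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1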
00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:13:18.255 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:13:18.255 "params": { 00:13:18.255 "name": "Nvme1", 00:13:18.255 "trtype": "tcp", 00:13:18.255 "traddr": "10.0.0.2", 00:13:18.255 "adrfam": "ipv4", 00:13:18.255 "trsvcid": "4420", 00:13:18.255 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:18.255 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:18.255 "hdgst": false, 00:13:18.255 "ddgst": false 00:13:18.255 }, 00:13:18.255 "method": "bdev_nvme_attach_controller" 00:13:18.255 }' 00:13:18.255 [2024-10-08 17:29:10.140175] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:13:18.255 [2024-10-08 17:29:10.140251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid214591 ] 00:13:18.255 [2024-10-08 17:29:10.226024] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.517 [2024-10-08 17:29:10.324321] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.778 Running I/O for 10 seconds... 00:13:20.660 7337.00 IOPS, 57.32 MiB/s [2024-10-08T15:29:13.593Z] 8548.50 IOPS, 66.79 MiB/s [2024-10-08T15:29:14.978Z] 8963.00 IOPS, 70.02 MiB/s [2024-10-08T15:29:15.919Z] 9169.50 IOPS, 71.64 MiB/s [2024-10-08T15:29:16.861Z] 9295.40 IOPS, 72.62 MiB/s [2024-10-08T15:29:17.803Z] 9375.00 IOPS, 73.24 MiB/s [2024-10-08T15:29:18.744Z] 9430.71 IOPS, 73.68 MiB/s [2024-10-08T15:29:19.685Z] 9470.88 IOPS, 73.99 MiB/s [2024-10-08T15:29:20.626Z] 9504.67 IOPS, 74.26 MiB/s [2024-10-08T15:29:20.626Z] 9531.50 IOPS, 74.46 MiB/s 00:13:28.634 Latency(us) 00:13:28.634 [2024-10-08T15:29:20.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.634 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:13:28.634 Verification LBA range: start 0x0 length 0x1000 00:13:28.634 Nvme1n1 : 10.01 9532.11 74.47 0.00 0.00 13382.83 1024.00 27962.03 00:13:28.634 [2024-10-08T15:29:20.626Z] =================================================================================================================== 00:13:28.634 [2024-10-08T15:29:20.626Z] Total : 9532.11 74.47 0.00 0.00 13382.83 1024.00 27962.03 00:13:28.895 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=216766 00:13:28.895 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:13:28.895 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:28.895 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:28.895 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:28.895 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:13:28.895 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:13:28.895 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:13:28.895 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:13:28.895 { 00:13:28.895 "params": { 00:13:28.895 "name": 
"Nvme$subsystem", 00:13:28.895 "trtype": "$TEST_TRANSPORT", 00:13:28.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:28.895 "adrfam": "ipv4", 00:13:28.895 "trsvcid": "$NVMF_PORT", 00:13:28.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:28.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:28.895 "hdgst": ${hdgst:-false}, 00:13:28.895 "ddgst": ${ddgst:-false} 00:13:28.895 }, 00:13:28.895 "method": "bdev_nvme_attach_controller" 00:13:28.895 } 00:13:28.895 EOF 00:13:28.895 )") 00:13:28.895 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:13:28.895 [2024-10-08 17:29:20.695151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.895 [2024-10-08 17:29:20.695182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.895 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:13:28.895 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:13:28.895 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:13:28.895 "params": { 00:13:28.895 "name": "Nvme1", 00:13:28.895 "trtype": "tcp", 00:13:28.895 "traddr": "10.0.0.2", 00:13:28.895 "adrfam": "ipv4", 00:13:28.895 "trsvcid": "4420", 00:13:28.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:28.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:28.895 "hdgst": false, 00:13:28.895 "ddgst": false 00:13:28.895 }, 00:13:28.895 "method": "bdev_nvme_attach_controller" 00:13:28.895 }' 00:13:28.895 [2024-10-08 17:29:20.707147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.895 [2024-10-08 17:29:20.707156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.895 [2024-10-08 17:29:20.719175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.895 [2024-10-08 17:29:20.719182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.895 [2024-10-08 17:29:20.731205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.895 [2024-10-08 17:29:20.731212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.895 [2024-10-08 17:29:20.738843] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:13:28.895 [2024-10-08 17:29:20.738890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid216766 ]
[... NSID error pair recurs, 17:29:20.743 through 17:29:20.803 ...]
00:13:28.895 [2024-10-08 17:29:20.813629] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[... NSID error pair recurs, 17:29:20.815 through 17:29:20.863 ...]
00:13:28.895 [2024-10-08 17:29:20.867601] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
[... NSID error pair recurs, 17:29:20.875 through 17:29:21.092 ...]
00:13:29.156 Running I/O for 5 seconds...
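The EAL banner above is bdevperf starting against the generated config, pinned to one core. A hedged sketch of such a launch (the flag values are illustrative, not quoted from the harness; -o 8192 is inferred from the IOPS-to-MiB/s ratio in the stat lines below):

# Hedged launch sketch; --json points at the config generated earlier,
# -q/-o/-w/-t are queue depth, I/O size in bytes, workload and duration.
./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 8192 -w verify -t 5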
[... NSID error pair recurs, 17:29:21.107 through 17:29:22.092 ...]
00:13:30.204 19021.00 IOPS, 148.60 MiB/s [2024-10-08T15:29:22.196Z]
[... NSID error pair recurs, 17:29:22.105 through 17:29:22.247 ...]
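The recurring pair is the target-side trace of a hot-attach loop running concurrently with the I/O: judging by the function names, nvmf_rpc_ns_paused is the paused-subsystem stage of the nvmf_subsystem_add_ns RPC, and each attempt is rejected by spdk_nvmf_subsystem_add_ns_ext because NSID 1 is still attached, which appears intentional here as a way of exercising the pause/resume path under load. A hedged reproduction of just the failing call (Malloc0 and the pre-existing NSID 1 attachment are assumptions about target state, not taken from this log):

# Assumes a live target whose nqn.2016-06.io.spdk:cnode1 already has NSID 1;
# Malloc0 is an illustrative bdev name.
scripts/rpc.py nvmf_subsystem_add_ns --nsid 1 nqn.2016-06.io.spdk:cnode1 Malloc0
# The target log then emits the same two lines seen throughout this run:
#   spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
#   nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace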
[... NSID error pair recurs, 17:29:22.260 through 17:29:23.085 ...]
00:13:31.253 19134.50 IOPS, 149.49 MiB/s [2024-10-08T15:29:23.245Z]
[... NSID error pair recurs, 17:29:23.099 through 17:29:23.191 ...]
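The per-second stat lines are mutually consistent with 8192-byte I/Os (19134.50 IOPS x 8192 B / 2^20 is about 149.49 MiB/s); a quick check, with io_size inferred from that ratio rather than quoted from the harness:

awk 'BEGIN {
  io_size = 8192                              # bytes per I/O, inferred
  n = split("19021.00 19134.50 19161.33", iops, " ")
  for (i = 1; i <= n; i++)                    # IOPS -> MiB/s
    printf "%.2f IOPS -> %.2f MiB/s\n", iops[i], iops[i] * io_size / (1024 * 1024)
}'
# Prints 148.60, 149.49 and 149.70 MiB/s, matching the three stat lines.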
[... NSID error pair recurs, 17:29:23.204 through 17:29:24.098 ...]
00:13:32.303 19161.33 IOPS, 149.70 MiB/s [2024-10-08T15:29:24.295Z]
[... NSID error pair recurs, 17:29:24.112 through 17:29:24.513 ...]
00:13:32.566 [2024-10-08 17:29:24.526924]
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.566 [2024-10-08 17:29:24.526938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.566 [2024-10-08 17:29:24.539300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.566 [2024-10-08 17:29:24.539314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.566 [2024-10-08 17:29:24.551918] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.566 [2024-10-08 17:29:24.551932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.829 [2024-10-08 17:29:24.564700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.829 [2024-10-08 17:29:24.564715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.829 [2024-10-08 17:29:24.577203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.829 [2024-10-08 17:29:24.577217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.829 [2024-10-08 17:29:24.590142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.829 [2024-10-08 17:29:24.590156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.829 [2024-10-08 17:29:24.602934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.829 [2024-10-08 17:29:24.602949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.829 [2024-10-08 17:29:24.615528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.829 [2024-10-08 17:29:24.615543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.829 [2024-10-08 17:29:24.628171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.829 [2024-10-08 17:29:24.628186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.829 [2024-10-08 17:29:24.641804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.829 [2024-10-08 17:29:24.641818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.829 [2024-10-08 17:29:24.654098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.829 [2024-10-08 17:29:24.654112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.829 [2024-10-08 17:29:24.667388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.829 [2024-10-08 17:29:24.667402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.829 [2024-10-08 17:29:24.680369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.829 [2024-10-08 17:29:24.680383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.829 [2024-10-08 17:29:24.692768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.829 [2024-10-08 17:29:24.692782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.829 [2024-10-08 17:29:24.705690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.829 [2024-10-08 17:29:24.705704] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.829 [2024-10-08 17:29:24.718461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.829 [2024-10-08 17:29:24.718480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.829 [2024-10-08 17:29:24.731528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.829 [2024-10-08 17:29:24.731542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.829 [2024-10-08 17:29:24.744863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.829 [2024-10-08 17:29:24.744877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.829 [2024-10-08 17:29:24.758260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.829 [2024-10-08 17:29:24.758274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.829 [2024-10-08 17:29:24.771133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.829 [2024-10-08 17:29:24.771147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.829 [2024-10-08 17:29:24.784170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.829 [2024-10-08 17:29:24.784185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.829 [2024-10-08 17:29:24.797293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.829 [2024-10-08 17:29:24.797307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.829 [2024-10-08 17:29:24.810380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.829 [2024-10-08 17:29:24.810395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.091 [2024-10-08 17:29:24.823309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.091 [2024-10-08 17:29:24.823324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.091 [2024-10-08 17:29:24.836662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.091 [2024-10-08 17:29:24.836678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.091 [2024-10-08 17:29:24.849358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.091 [2024-10-08 17:29:24.849372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.091 [2024-10-08 17:29:24.862660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.091 [2024-10-08 17:29:24.862674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.091 [2024-10-08 17:29:24.875736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.091 [2024-10-08 17:29:24.875750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.091 [2024-10-08 17:29:24.889398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.091 [2024-10-08 17:29:24.889412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.091 [2024-10-08 17:29:24.902778] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.091 [2024-10-08 17:29:24.902793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.091 [2024-10-08 17:29:24.915826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.091 [2024-10-08 17:29:24.915840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.091 [2024-10-08 17:29:24.928980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.091 [2024-10-08 17:29:24.928994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.091 [2024-10-08 17:29:24.942054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.091 [2024-10-08 17:29:24.942068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.091 [2024-10-08 17:29:24.955227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.091 [2024-10-08 17:29:24.955242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.091 [2024-10-08 17:29:24.968527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.091 [2024-10-08 17:29:24.968545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.091 [2024-10-08 17:29:24.981590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.091 [2024-10-08 17:29:24.981604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.091 [2024-10-08 17:29:24.994309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.091 [2024-10-08 17:29:24.994324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.091 [2024-10-08 17:29:25.007666] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.091 [2024-10-08 17:29:25.007680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.091 [2024-10-08 17:29:25.021197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.091 [2024-10-08 17:29:25.021212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.091 [2024-10-08 17:29:25.034641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.091 [2024-10-08 17:29:25.034656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.091 [2024-10-08 17:29:25.047999] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.091 [2024-10-08 17:29:25.048013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.091 [2024-10-08 17:29:25.061323] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.091 [2024-10-08 17:29:25.061337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.091 [2024-10-08 17:29:25.074538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.091 [2024-10-08 17:29:25.074552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.352 [2024-10-08 17:29:25.087960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.352 [2024-10-08 17:29:25.087978] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.352 [2024-10-08 17:29:25.101155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.352 [2024-10-08 17:29:25.101170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.352 19186.50 IOPS, 149.89 MiB/s [2024-10-08T15:29:25.344Z] [2024-10-08 17:29:25.114878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.352 [2024-10-08 17:29:25.114893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.352 [2024-10-08 17:29:25.128089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.352 [2024-10-08 17:29:25.128104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.352 [2024-10-08 17:29:25.141577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.352 [2024-10-08 17:29:25.141592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.352 [2024-10-08 17:29:25.154962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.352 [2024-10-08 17:29:25.154981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.352 [2024-10-08 17:29:25.168321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.352 [2024-10-08 17:29:25.168336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.352 [2024-10-08 17:29:25.181408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.352 [2024-10-08 17:29:25.181423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.352 [2024-10-08 17:29:25.194403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.352 [2024-10-08 17:29:25.194418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.352 [2024-10-08 17:29:25.207963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.352 [2024-10-08 17:29:25.207981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.352 [2024-10-08 17:29:25.221416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.352 [2024-10-08 17:29:25.221434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.352 [2024-10-08 17:29:25.234222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.352 [2024-10-08 17:29:25.234237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.352 [2024-10-08 17:29:25.247657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.352 [2024-10-08 17:29:25.247672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.352 [2024-10-08 17:29:25.261014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.352 [2024-10-08 17:29:25.261028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.352 [2024-10-08 17:29:25.274490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.352 [2024-10-08 17:29:25.274504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.352 [2024-10-08 
17:29:25.286798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.352 [2024-10-08 17:29:25.286812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.352 [2024-10-08 17:29:25.299219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.352 [2024-10-08 17:29:25.299233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.352 [2024-10-08 17:29:25.312373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.352 [2024-10-08 17:29:25.312387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.352 [2024-10-08 17:29:25.325078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.352 [2024-10-08 17:29:25.325092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.352 [2024-10-08 17:29:25.338621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.352 [2024-10-08 17:29:25.338636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.613 [2024-10-08 17:29:25.351345] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.613 [2024-10-08 17:29:25.351359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.613 [2024-10-08 17:29:25.364932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.613 [2024-10-08 17:29:25.364947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.613 [2024-10-08 17:29:25.378559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.613 [2024-10-08 17:29:25.378574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.613 [2024-10-08 17:29:25.391661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.613 [2024-10-08 17:29:25.391676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.613 [2024-10-08 17:29:25.404940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.613 [2024-10-08 17:29:25.404954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.613 [2024-10-08 17:29:25.417387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.613 [2024-10-08 17:29:25.417402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.613 [2024-10-08 17:29:25.429788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.613 [2024-10-08 17:29:25.429803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.613 [2024-10-08 17:29:25.442783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.613 [2024-10-08 17:29:25.442798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.613 [2024-10-08 17:29:25.456098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.613 [2024-10-08 17:29:25.456113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.613 [2024-10-08 17:29:25.469563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.613 [2024-10-08 17:29:25.469579] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.613 [2024-10-08 17:29:25.482020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.613 [2024-10-08 17:29:25.482034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.613 [2024-10-08 17:29:25.494937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.613 [2024-10-08 17:29:25.494952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.613 [2024-10-08 17:29:25.508497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.613 [2024-10-08 17:29:25.508512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.613 [2024-10-08 17:29:25.522076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.613 [2024-10-08 17:29:25.522090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.613 [2024-10-08 17:29:25.535074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.613 [2024-10-08 17:29:25.535088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.613 [2024-10-08 17:29:25.547095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.613 [2024-10-08 17:29:25.547109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.613 [2024-10-08 17:29:25.559744] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.613 [2024-10-08 17:29:25.559759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.613 [2024-10-08 17:29:25.572708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.613 [2024-10-08 17:29:25.572723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.613 [2024-10-08 17:29:25.585319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.613 [2024-10-08 17:29:25.585334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.613 [2024-10-08 17:29:25.598096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.613 [2024-10-08 17:29:25.598111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.872 [2024-10-08 17:29:25.610793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.872 [2024-10-08 17:29:25.610807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.872 [2024-10-08 17:29:25.624209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.872 [2024-10-08 17:29:25.624224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.872 [2024-10-08 17:29:25.637716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.872 [2024-10-08 17:29:25.637730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.872 [2024-10-08 17:29:25.650341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.872 [2024-10-08 17:29:25.650355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.872 [2024-10-08 17:29:25.663412] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.872 [2024-10-08 17:29:25.663426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.872 [2024-10-08 17:29:25.677110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.872 [2024-10-08 17:29:25.677124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.872 [2024-10-08 17:29:25.690340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.872 [2024-10-08 17:29:25.690355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.872 [2024-10-08 17:29:25.702930] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.872 [2024-10-08 17:29:25.702945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.872 [2024-10-08 17:29:25.715458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.872 [2024-10-08 17:29:25.715473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.872 [2024-10-08 17:29:25.729100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.872 [2024-10-08 17:29:25.729115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.872 [2024-10-08 17:29:25.742620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.872 [2024-10-08 17:29:25.742635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.872 [2024-10-08 17:29:25.755163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.872 [2024-10-08 17:29:25.755177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.872 [2024-10-08 17:29:25.768298] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.872 [2024-10-08 17:29:25.768313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.872 [2024-10-08 17:29:25.781591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.872 [2024-10-08 17:29:25.781606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.872 [2024-10-08 17:29:25.795047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.872 [2024-10-08 17:29:25.795061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.872 [2024-10-08 17:29:25.808813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.872 [2024-10-08 17:29:25.808828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.872 [2024-10-08 17:29:25.821135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.872 [2024-10-08 17:29:25.821149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.872 [2024-10-08 17:29:25.833588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.872 [2024-10-08 17:29:25.833602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.872 [2024-10-08 17:29:25.846807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.872 [2024-10-08 17:29:25.846822] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.872 [2024-10-08 17:29:25.860364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.872 [2024-10-08 17:29:25.860378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.132 [2024-10-08 17:29:25.872934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.133 [2024-10-08 17:29:25.872948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.133 [2024-10-08 17:29:25.885384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.133 [2024-10-08 17:29:25.885398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.133 [2024-10-08 17:29:25.898510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.133 [2024-10-08 17:29:25.898524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.133 [2024-10-08 17:29:25.911803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.133 [2024-10-08 17:29:25.911817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.133 [2024-10-08 17:29:25.924927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.133 [2024-10-08 17:29:25.924941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.133 [2024-10-08 17:29:25.937712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.133 [2024-10-08 17:29:25.937726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.133 [2024-10-08 17:29:25.951361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.133 [2024-10-08 17:29:25.951375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.133 [2024-10-08 17:29:25.964352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.133 [2024-10-08 17:29:25.964367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.133 [2024-10-08 17:29:25.977140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.133 [2024-10-08 17:29:25.977155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.133 [2024-10-08 17:29:25.990365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.133 [2024-10-08 17:29:25.990380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.133 [2024-10-08 17:29:26.003759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.133 [2024-10-08 17:29:26.003773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.133 [2024-10-08 17:29:26.017176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.133 [2024-10-08 17:29:26.017190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.133 [2024-10-08 17:29:26.030197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.133 [2024-10-08 17:29:26.030211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.133 [2024-10-08 17:29:26.043608] 
[... the error pair continues, [2024-10-08 17:29:26.043608] through [2024-10-08 17:29:26.096521] ...]
00:13:34.133 19204.60 IOPS, 150.04 MiB/s [2024-10-08T15:29:26.125Z]
[2024-10-08 17:29:26.108769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.133 [2024-10-08 17:29:26.108783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.133
00:13:34.133 Latency(us)
00:13:34.133 [2024-10-08T15:29:26.125Z] Device Information : runtime(s)     IOPS     MiB/s  Fail/s  TO/s  Average  min      max
00:13:34.133 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:13:34.133 Nvme1n1            :       5.01  19208.36  150.07    0.00  0.00  6657.75  3072.00  18568.53
00:13:34.133 [2024-10-08T15:29:26.125Z] ===================================================================================================================
00:13:34.133 [2024-10-08T15:29:26.125Z] Total              :             19208.36  150.07    0.00  0.00  6657.75  3072.00  18568.53
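Two quick consistency checks on the summary table above (editorial additions, not part of the tool's output): the throughput column follows from the IOPS column at the job's 8192-byte I/O size, and the average latency is approximately queue depth over IOPS (Little's law):

    19208.36 IOPS × 8192 B = 157,354,885 B/s ≈ 150.07 MiB/s   (matches the MiB/s column)
    128 ÷ 19208.36 IOPS ≈ 6664 µs                              (close to the reported 6657.75 µs average)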
[... the subsystem.c:2128 / nvmf_rpc.c:1517 error pair resumes at [2024-10-08 17:29:26.118525] and repeats, at roughly 12 ms intervals, through [2024-10-08 17:29:26.226808] ...]
00:13:34.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (216766) - No such process
00:13:34.394 17:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 216766
00:13:34.394 17:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:34.394 17:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:34.394 17:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:34.394 17:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:34.394 17:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:13:34.394 17:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:34.394 17:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:34.394 delay0
00:13:34.394 17:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:34.394 17:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:13:34.394 17:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:34.394 17:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:34.394 17:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:34.394 17:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:13:34.654 [2024-10-08 17:29:26.422220] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:13:41.239 Initializing NVMe Controllers
00:13:41.239 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:41.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:41.239 Initialization complete. Launching workers.
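The xtrace above records the namespace being recreated on top of a delay bdev and the abort example being pointed at it. Condensed into plain commands, the sequence looks like the sketch below; the direct rpc.py invocation is an assumption for illustration (the harness goes through its rpc_cmd wrapper), while the arguments are taken verbatim from the log:

    # Sketch of zcopy.sh steps @52-@56 traced above (assumes SPDK's rpc.py is
    # on PATH and talks to the target over the default RPC socket).
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # free NSID 1
    rpc.py bdev_delay_create -b malloc0 -d delay0 \
           -r 1000000 -t 1000000 -w 1000000 -n 1000000                 # slow bdev over malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1  # re-add as NSID 1
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'    # drive abortable I/O

The large delay parameters keep I/O outstanding long enough for the abort example, whose completion statistics follow, to have something to abort.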
00:13:41.239 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 161
00:13:41.239 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 448, failed to submit 33
00:13:41.239 success 272, unsuccessful 176, failed 0
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:41.239 rmmod nvme_tcp
00:13:41.239 rmmod nvme_fabrics
00:13:41.239 rmmod nvme_keyring
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 214544 ']'
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 214544
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 214544 ']'
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 214544
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 214544
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 214544'
00:13:41.239 killing process with pid 214544
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 214544
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 214544
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:41.239 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:43.153 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:43.153
00:13:43.153 real 0m33.718s
00:13:43.153 user 0m45.380s
00:13:43.153 sys 0m10.209s
00:13:43.153 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:43.154 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:43.154 ************************************
00:13:43.154 END TEST nvmf_zcopy
00:13:43.154 ************************************
00:13:43.154 17:29:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:13:43.154 17:29:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:13:43.154 17:29:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:43.154 17:29:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:13:43.154 ************************************
00:13:43.154 START TEST nvmf_nmic
00:13:43.154 ************************************
00:13:43.154 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:13:43.154 * Looking for test storage...
00:13:43.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.154 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:43.154 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:13:43.154 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:43.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.415 --rc genhtml_branch_coverage=1 00:13:43.415 --rc genhtml_function_coverage=1 00:13:43.415 --rc genhtml_legend=1 00:13:43.415 --rc geninfo_all_blocks=1 00:13:43.415 --rc geninfo_unexecuted_blocks=1 00:13:43.415 00:13:43.415 ' 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:43.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.415 --rc genhtml_branch_coverage=1 00:13:43.415 --rc genhtml_function_coverage=1 00:13:43.415 --rc genhtml_legend=1 00:13:43.415 --rc geninfo_all_blocks=1 00:13:43.415 --rc geninfo_unexecuted_blocks=1 00:13:43.415 00:13:43.415 ' 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:43.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.415 --rc genhtml_branch_coverage=1 00:13:43.415 --rc genhtml_function_coverage=1 00:13:43.415 --rc genhtml_legend=1 00:13:43.415 --rc geninfo_all_blocks=1 00:13:43.415 --rc geninfo_unexecuted_blocks=1 00:13:43.415 00:13:43.415 ' 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:43.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.415 --rc genhtml_branch_coverage=1 00:13:43.415 --rc genhtml_function_coverage=1 00:13:43.415 --rc genhtml_legend=1 00:13:43.415 --rc geninfo_all_blocks=1 00:13:43.415 --rc geninfo_unexecuted_blocks=1 00:13:43.415 00:13:43.415 ' 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
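The xtrace above steps through the harness's dotted-version comparison (lt 1.15 2 via cmp_versions) to decide whether the installed lcov predates version 2. For readers who want the logic outside the harness, a simplified, self-contained re-implementation of the same comparison (an illustrative sketch, not the scripts/common.sh source):

    # lt A B: succeed when dotted version A is strictly less than B.
    lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"   # split "1.15" -> (1 15), as in the trace
      IFS=.-: read -ra ver2 <<< "$2"   # split "2"    -> (2)
      local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( i = 0; i < max; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing components count as 0
        (( a > b )) && return 1                 # first differing component decides
        (( a < b )) && return 0
      done
      return 1                                  # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov is older than 2"    # matches the trace: 1 < 2, so true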
00:13:43.415 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:43.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:43.416 
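The "[: : integer expression expected" complaint logged above comes from common.sh line 33 applying the numeric -eq operator to an empty string; '[' requires both operands of -eq to be integers. A two-line reproduction and a guarded form (illustrative only, not a patch to the harness; the variable name here is made up):

    x=""
    [ "$x" -eq 1 ] && echo match                        # bash: [: : integer expression expected
    [[ -n "$x" ]] && [ "$x" -eq 1 ] && echo match       # run the numeric test only when non-empty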
17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:13:43.416 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:51.559 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:51.559 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:51.559 17:29:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:51.559 Found net devices under 0000:31:00.0: cvl_0_0 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:51.559 Found net devices under 0000:31:00.1: cvl_0_1 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:51.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:13:51.559 00:13:51.559 --- 10.0.0.2 ping statistics --- 00:13:51.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.559 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:51.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:51.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:13:51.559 00:13:51.559 --- 10.0.0.1 ping statistics --- 00:13:51.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.559 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:51.559 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.560 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:51.560 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:51.560 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.560 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:51.560 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:51.560 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:51.560 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:51.560 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:51.560 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:51.560 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=223480 00:13:51.560 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 223480 00:13:51.560 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:51.560 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 223480 ']' 00:13:51.560 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.560 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:51.560 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.560 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:51.560 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:51.560 [2024-10-08 17:29:42.974410] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
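
For orientation, the namespace plumbing that nvmftestinit traced above reduces to the following sequence. This is a condensed sketch of the commands visible in the trace, not the verbatim script; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this host, and the iptables comment tag and full binary paths are omitted here.

    # cvl_0_0 becomes the target-side NIC inside a private namespace;
    # cvl_0_1 stays in the root namespace as the initiator side.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns
    # the target application then runs inside the namespace
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF
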
00:13:51.560 [2024-10-08 17:29:42.974474] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.560 [2024-10-08 17:29:43.064661] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:51.560 [2024-10-08 17:29:43.162690] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.560 [2024-10-08 17:29:43.162749] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.560 [2024-10-08 17:29:43.162758] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:51.560 [2024-10-08 17:29:43.162766] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:51.560 [2024-10-08 17:29:43.162772] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:51.560 [2024-10-08 17:29:43.165327] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.560 [2024-10-08 17:29:43.165489] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.560 [2024-10-08 17:29:43.165648] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.560 [2024-10-08 17:29:43.165647] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:51.820 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:51.820 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:13:51.821 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:51.821 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:51.821 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:52.082 [2024-10-08 17:29:43.848232] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:52.082 Malloc0 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:52.082 [2024-10-08 17:29:43.913939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:52.082 test case1: single bdev can't be used in multiple subsystems 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:52.082 [2024-10-08 17:29:43.949730] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:52.082 [2024-10-08 17:29:43.949758] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:52.082 [2024-10-08 17:29:43.949767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:52.082 request: 00:13:52.082 { 00:13:52.082 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:52.082 "namespace": { 00:13:52.082 "bdev_name": "Malloc0", 00:13:52.082 "no_auto_visible": false 
00:13:52.082 }, 00:13:52.082 "method": "nvmf_subsystem_add_ns", 00:13:52.082 "req_id": 1 00:13:52.082 } 00:13:52.082 Got JSON-RPC error response 00:13:52.082 response: 00:13:52.082 { 00:13:52.082 "code": -32602, 00:13:52.082 "message": "Invalid parameters" 00:13:52.082 } 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:52.082 Adding namespace failed - expected result. 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:52.082 test case2: host connect to nvmf target in multiple paths 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:52.082 [2024-10-08 17:29:43.961937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.082 17:29:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:53.997 17:29:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:55.384 17:29:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:55.384 17:29:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:55.384 17:29:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:55.384 17:29:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:55.384 17:29:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:57.297 17:29:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:57.297 17:29:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:57.297 17:29:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:57.297 17:29:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:57.297 17:29:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:57.297 17:29:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:57.297 17:29:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:57.297 [global] 00:13:57.297 thread=1 00:13:57.297 invalidate=1 00:13:57.297 rw=write 00:13:57.297 time_based=1 00:13:57.297 runtime=1 00:13:57.297 ioengine=libaio 00:13:57.297 direct=1 00:13:57.297 bs=4096 00:13:57.297 iodepth=1 00:13:57.297 norandommap=0 00:13:57.297 numjobs=1 00:13:57.297 00:13:57.297 verify_dump=1 00:13:57.297 verify_backlog=512 00:13:57.297 verify_state_save=0 00:13:57.297 do_verify=1 00:13:57.297 verify=crc32c-intel 00:13:57.297 [job0] 00:13:57.297 filename=/dev/nvme0n1 00:13:57.297 Could not set queue depth (nvme0n1) 00:13:57.888 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:57.888 fio-3.35 00:13:57.888 Starting 1 thread 00:13:58.831 00:13:58.831 job0: (groupid=0, jobs=1): err= 0: pid=224944: Tue Oct 8 17:29:50 2024 00:13:58.831 read: IOPS=820, BW=3281KiB/s (3359kB/s)(3284KiB/1001msec) 00:13:58.831 slat (nsec): min=6739, max=67705, avg=22475.40, stdev=8175.70 00:13:58.831 clat (usec): min=241, max=1242, avg=678.16, stdev=152.34 00:13:58.831 lat (usec): min=251, max=1268, avg=700.63, stdev=154.71 00:13:58.831 clat percentiles (usec): 00:13:58.831 | 1.00th=[ 281], 5.00th=[ 375], 10.00th=[ 453], 20.00th=[ 553], 00:13:58.831 | 30.00th=[ 619], 40.00th=[ 676], 50.00th=[ 709], 60.00th=[ 742], 00:13:58.831 | 70.00th=[ 775], 80.00th=[ 816], 90.00th=[ 848], 95.00th=[ 873], 00:13:58.831 | 99.00th=[ 906], 99.50th=[ 930], 99.90th=[ 1237], 99.95th=[ 1237], 00:13:58.831 | 99.99th=[ 1237] 00:13:58.831 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:58.831 slat (usec): min=9, max=27556, avg=52.06, stdev=860.42 00:13:58.831 clat (usec): min=108, max=715, avg=351.02, stdev=124.05 00:13:58.831 lat (usec): min=119, max=28065, avg=403.09, stdev=874.95 00:13:58.831 clat percentiles (usec): 00:13:58.831 | 1.00th=[ 116], 5.00th=[ 123], 10.00th=[ 202], 20.00th=[ 237], 00:13:58.831 | 30.00th=[ 297], 40.00th=[ 318], 50.00th=[ 334], 60.00th=[ 388], 00:13:58.831 | 70.00th=[ 416], 80.00th=[ 449], 90.00th=[ 519], 95.00th=[ 562], 00:13:58.831 | 99.00th=[ 627], 99.50th=[ 652], 99.90th=[ 693], 99.95th=[ 717], 00:13:58.831 | 99.99th=[ 717] 00:13:58.831 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:13:58.831 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:58.831 lat (usec) : 250=12.41%, 500=41.63%, 750=29.86%, 1000=16.04% 00:13:58.831 lat (msec) : 2=0.05% 00:13:58.831 cpu : usr=2.40%, sys=4.60%, ctx=1849, majf=0, minf=1 00:13:58.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:58.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.831 issued rwts: total=821,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.831 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:58.831 00:13:58.831 Run status group 0 (all jobs): 00:13:58.831 READ: bw=3281KiB/s (3359kB/s), 3281KiB/s-3281KiB/s (3359kB/s-3359kB/s), io=3284KiB (3363kB), run=1001-1001msec 00:13:58.831 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:13:58.831 00:13:58.831 Disk stats (read/write): 00:13:58.831 nvme0n1: ios=712/1024, merge=0/0, ticks=1428/346, in_queue=1774, util=98.70% 00:13:58.832 17:29:50 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:59.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:59.092 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:59.092 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:59.092 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:59.092 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:59.092 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:59.092 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:59.092 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:59.092 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:59.092 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:59.092 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:59.092 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:13:59.093 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:59.093 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:13:59.093 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:59.093 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:59.093 rmmod nvme_tcp 00:13:59.093 rmmod nvme_fabrics 00:13:59.093 rmmod nvme_keyring 00:13:59.093 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:59.093 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:13:59.093 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:13:59.093 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 223480 ']' 00:13:59.093 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 223480 00:13:59.093 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 223480 ']' 00:13:59.093 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 223480 00:13:59.093 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:13:59.093 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:59.093 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 223480 00:13:59.354 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:59.354 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:59.354 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 223480' 00:13:59.354 killing process with pid 223480 00:13:59.354 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 223480 00:13:59.354 17:29:51 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 223480 00:13:59.354 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:59.354 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:59.354 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:59.354 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:13:59.354 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:13:59.354 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:59.354 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:13:59.354 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:59.354 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:59.354 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.354 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.354 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.900 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:01.900 00:14:01.900 real 0m18.375s 00:14:01.900 user 0m48.899s 00:14:01.900 sys 0m6.990s 00:14:01.900 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:01.900 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:01.900 ************************************ 00:14:01.900 END TEST nvmf_nmic 00:14:01.900 ************************************ 00:14:01.900 17:29:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:01.900 17:29:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:01.900 17:29:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:01.900 17:29:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:01.900 ************************************ 00:14:01.900 START TEST nvmf_fio_target 00:14:01.900 ************************************ 00:14:01.900 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:01.900 * Looking for test storage... 
00:14:01.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:01.900 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:01.900 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:14:01.900 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:01.900 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:01.900 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:01.900 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:01.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.901 --rc genhtml_branch_coverage=1 00:14:01.901 --rc genhtml_function_coverage=1 00:14:01.901 --rc genhtml_legend=1 00:14:01.901 --rc geninfo_all_blocks=1 00:14:01.901 --rc geninfo_unexecuted_blocks=1 00:14:01.901 00:14:01.901 ' 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:01.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.901 --rc genhtml_branch_coverage=1 00:14:01.901 --rc genhtml_function_coverage=1 00:14:01.901 --rc genhtml_legend=1 00:14:01.901 --rc geninfo_all_blocks=1 00:14:01.901 --rc geninfo_unexecuted_blocks=1 00:14:01.901 00:14:01.901 ' 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:01.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.901 --rc genhtml_branch_coverage=1 00:14:01.901 --rc genhtml_function_coverage=1 00:14:01.901 --rc genhtml_legend=1 00:14:01.901 --rc geninfo_all_blocks=1 00:14:01.901 --rc geninfo_unexecuted_blocks=1 00:14:01.901 00:14:01.901 ' 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:01.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.901 --rc genhtml_branch_coverage=1 00:14:01.901 --rc genhtml_function_coverage=1 00:14:01.901 --rc genhtml_legend=1 00:14:01.901 --rc geninfo_all_blocks=1 00:14:01.901 --rc geninfo_unexecuted_blocks=1 00:14:01.901 00:14:01.901 ' 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:01.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:01.901 17:29:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:01.901 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.902 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:01.902 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:01.902 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:01.902 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.902 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:01.902 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.902 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:01.902 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:01.902 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:01.902 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.047 17:30:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:10.047 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:10.047 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.047 17:30:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:10.047 Found net devices under 0000:31:00.0: cvl_0_0 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:10.047 Found net devices under 0000:31:00.1: cvl_0_1 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:10.047 17:30:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.047 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.048 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:10.048 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:10.048 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.048 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.048 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:10.048 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:10.048 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.048 17:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:10.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:14:10.048 00:14:10.048 --- 10.0.0.2 ping statistics --- 00:14:10.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.048 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:10.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:10.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:14:10.048 00:14:10.048 --- 10.0.0.1 ping statistics --- 00:14:10.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.048 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=229695 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 229695 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 229695 ']' 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:10.048 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.048 [2024-10-08 17:30:01.371619] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
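[Annotation — not from the run: everything nvmf_tcp_init did above reduces to a short recipe — flush both ports, move one physical port into a private namespace as the target side, leave its back-to-back peer in the root namespace as the initiator, open TCP/4420 in iptables, prove reachability with a ping in each direction, then launch nvmf_tgt inside the namespace. A condensed replay of the logged commands; the cvl_0_* names and the nvmf_tgt flags are specific to this machine and run:]

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> root namespace
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF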
00:14:10.048 [2024-10-08 17:30:01.371682] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.048 [2024-10-08 17:30:01.465199] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:10.048 [2024-10-08 17:30:01.561238] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.048 [2024-10-08 17:30:01.561299] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.048 [2024-10-08 17:30:01.561308] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.048 [2024-10-08 17:30:01.561316] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.048 [2024-10-08 17:30:01.561323] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.048 [2024-10-08 17:30:01.563479] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.048 [2024-10-08 17:30:01.563611] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.048 [2024-10-08 17:30:01.563757] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:10.048 [2024-10-08 17:30:01.563758] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.310 17:30:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:10.310 17:30:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:14:10.310 17:30:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:10.310 17:30:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:10.310 17:30:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.310 17:30:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.310 17:30:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:10.571 [2024-10-08 17:30:02.444232] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.571 17:30:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:10.832 17:30:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:10.832 17:30:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:11.094 17:30:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:11.094 17:30:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:11.355 17:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:11.355 17:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:11.355 17:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:11.355 17:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:11.616 17:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:11.877 17:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:11.877 17:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:12.138 17:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:12.138 17:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:12.398 17:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:12.398 17:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:12.398 17:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:12.658 17:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:12.658 17:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:12.919 17:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:12.919 17:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:13.180 17:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:13.180 [2024-10-08 17:30:05.083295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.180 17:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:13.441 17:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:13.701 17:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:15.085 17:30:06 
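[Annotation — not from the run: the fio.sh provisioning above boils down to one TCP transport, seven 64 MiB malloc bdevs, a raid0 over two of them, a concat over three more, and a single subsystem exporting four namespaces — Malloc0, Malloc1, raid0, concat0 — on 10.0.0.2:4420, which is why the waitforserial step below expects four block devices. The same RPC sequence, collapsed and lightly regrouped, with the long workspace paths shortened to rpc.py:]

rpc.py nvmf_create_transport -t tcp -o -u 8192
for _ in {1..7}; do rpc.py bdev_malloc_create 64 512; done   # Malloc0..Malloc6: 64 MiB, 512 B blocks
rpc.py bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # initiator side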
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:15.086 17:30:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:14:15.086 17:30:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:15.086 17:30:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:14:15.086 17:30:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:14:15.086 17:30:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:14:16.998 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:16.998 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:16.998 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:16.998 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:14:16.998 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:16.998 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:14:16.998 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:17.258 [global] 00:14:17.258 thread=1 00:14:17.258 invalidate=1 00:14:17.258 rw=write 00:14:17.258 time_based=1 00:14:17.258 runtime=1 00:14:17.258 ioengine=libaio 00:14:17.258 direct=1 00:14:17.258 bs=4096 00:14:17.258 iodepth=1 00:14:17.258 norandommap=0 00:14:17.258 numjobs=1 00:14:17.258 00:14:17.258 verify_dump=1 00:14:17.258 verify_backlog=512 00:14:17.258 verify_state_save=0 00:14:17.258 do_verify=1 00:14:17.259 verify=crc32c-intel 00:14:17.259 [job0] 00:14:17.259 filename=/dev/nvme0n1 00:14:17.259 [job1] 00:14:17.259 filename=/dev/nvme0n2 00:14:17.259 [job2] 00:14:17.259 filename=/dev/nvme0n3 00:14:17.259 [job3] 00:14:17.259 filename=/dev/nvme0n4 00:14:17.259 Could not set queue depth (nvme0n1) 00:14:17.259 Could not set queue depth (nvme0n2) 00:14:17.259 Could not set queue depth (nvme0n3) 00:14:17.259 Could not set queue depth (nvme0n4) 00:14:17.518 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:17.518 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:17.518 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:17.518 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:17.518 fio-3.35 00:14:17.518 Starting 4 threads 00:14:18.904 00:14:18.904 job0: (groupid=0, jobs=1): err= 0: pid=231943: Tue Oct 8 17:30:10 2024 00:14:18.904 read: IOPS=18, BW=75.3KiB/s (77.1kB/s)(76.0KiB/1009msec) 00:14:18.904 slat (nsec): min=25438, max=30476, avg=26520.00, stdev=1488.83 00:14:18.904 clat (usec): min=862, max=42985, avg=37621.48, stdev=12933.88 00:14:18.904 lat (usec): min=888, max=43015, avg=37648.00, stdev=12934.11 00:14:18.904 clat percentiles (usec): 00:14:18.904 | 1.00th=[ 865], 5.00th=[ 865], 10.00th=[ 1020], 
20.00th=[41157], 00:14:18.904 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:14:18.904 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:14:18.904 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:14:18.904 | 99.99th=[42730] 00:14:18.904 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:14:18.904 slat (nsec): min=8996, max=52080, avg=32228.19, stdev=7298.06 00:14:18.904 clat (usec): min=129, max=909, avg=533.01, stdev=143.61 00:14:18.904 lat (usec): min=146, max=928, avg=565.24, stdev=145.35 00:14:18.904 clat percentiles (usec): 00:14:18.904 | 1.00th=[ 229], 5.00th=[ 285], 10.00th=[ 338], 20.00th=[ 400], 00:14:18.904 | 30.00th=[ 457], 40.00th=[ 498], 50.00th=[ 537], 60.00th=[ 586], 00:14:18.904 | 70.00th=[ 619], 80.00th=[ 668], 90.00th=[ 709], 95.00th=[ 758], 00:14:18.904 | 99.00th=[ 832], 99.50th=[ 832], 99.90th=[ 906], 99.95th=[ 906], 00:14:18.904 | 99.99th=[ 906] 00:14:18.904 bw ( KiB/s): min= 4096, max= 4096, per=35.55%, avg=4096.00, stdev= 0.00, samples=1 00:14:18.904 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:18.904 lat (usec) : 250=1.88%, 500=37.85%, 750=50.85%, 1000=6.03% 00:14:18.904 lat (msec) : 2=0.19%, 50=3.20% 00:14:18.904 cpu : usr=1.29%, sys=1.98%, ctx=531, majf=0, minf=1 00:14:18.904 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:18.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.904 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:18.904 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:18.904 job1: (groupid=0, jobs=1): err= 0: pid=231945: Tue Oct 8 17:30:10 2024 00:14:18.904 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:14:18.904 slat (nsec): min=6961, max=62192, avg=24595.34, stdev=4783.67 00:14:18.904 clat (usec): min=323, max=21503, avg=1030.46, stdev=930.61 00:14:18.904 lat (usec): min=348, max=21528, avg=1055.05, stdev=930.76 00:14:18.904 clat percentiles (usec): 00:14:18.904 | 1.00th=[ 498], 5.00th=[ 578], 10.00th=[ 725], 20.00th=[ 816], 00:14:18.904 | 30.00th=[ 922], 40.00th=[ 996], 50.00th=[ 1045], 60.00th=[ 1090], 00:14:18.904 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1205], 00:14:18.904 | 99.00th=[ 1336], 99.50th=[ 1565], 99.90th=[21627], 99.95th=[21627], 00:14:18.904 | 99.99th=[21627] 00:14:18.904 write: IOPS=857, BW=3429KiB/s (3511kB/s)(3432KiB/1001msec); 0 zone resets 00:14:18.904 slat (nsec): min=9557, max=52748, avg=29406.73, stdev=8702.42 00:14:18.904 clat (usec): min=119, max=1859, avg=494.57, stdev=169.42 00:14:18.904 lat (usec): min=152, max=1893, avg=523.97, stdev=170.89 00:14:18.904 clat percentiles (usec): 00:14:18.904 | 1.00th=[ 153], 5.00th=[ 251], 10.00th=[ 281], 20.00th=[ 338], 00:14:18.904 | 30.00th=[ 383], 40.00th=[ 437], 50.00th=[ 486], 60.00th=[ 537], 00:14:18.904 | 70.00th=[ 586], 80.00th=[ 652], 90.00th=[ 717], 95.00th=[ 766], 00:14:18.904 | 99.00th=[ 848], 99.50th=[ 873], 99.90th=[ 1860], 99.95th=[ 1860], 00:14:18.904 | 99.99th=[ 1860] 00:14:18.904 bw ( KiB/s): min= 4096, max= 4096, per=35.55%, avg=4096.00, stdev= 0.00, samples=1 00:14:18.904 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:18.904 lat (usec) : 250=2.85%, 500=30.80%, 750=29.05%, 1000=14.89% 00:14:18.904 lat (msec) : 2=22.26%, 4=0.07%, 50=0.07% 00:14:18.904 cpu : usr=1.70%, sys=4.30%, ctx=1370, majf=0, minf=1 
00:14:18.904 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:18.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.904 issued rwts: total=512,858,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:18.904 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:18.904 job2: (groupid=0, jobs=1): err= 0: pid=231946: Tue Oct 8 17:30:10 2024 00:14:18.904 read: IOPS=875, BW=3500KiB/s (3585kB/s)(3504KiB/1001msec) 00:14:18.904 slat (nsec): min=6712, max=59118, avg=23755.74, stdev=6810.22 00:14:18.904 clat (usec): min=235, max=6272, avg=600.05, stdev=231.56 00:14:18.904 lat (usec): min=261, max=6298, avg=623.81, stdev=231.85 00:14:18.904 clat percentiles (usec): 00:14:18.904 | 1.00th=[ 310], 5.00th=[ 392], 10.00th=[ 437], 20.00th=[ 502], 00:14:18.904 | 30.00th=[ 529], 40.00th=[ 545], 50.00th=[ 578], 60.00th=[ 603], 00:14:18.904 | 70.00th=[ 660], 80.00th=[ 717], 90.00th=[ 766], 95.00th=[ 799], 00:14:18.904 | 99.00th=[ 889], 99.50th=[ 1123], 99.90th=[ 6259], 99.95th=[ 6259], 00:14:18.904 | 99.99th=[ 6259] 00:14:18.904 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:14:18.904 slat (nsec): min=9392, max=63834, avg=28665.60, stdev=9924.19 00:14:18.904 clat (usec): min=105, max=733, avg=400.64, stdev=83.50 00:14:18.904 lat (usec): min=117, max=759, avg=429.31, stdev=84.94 00:14:18.904 clat percentiles (usec): 00:14:18.904 | 1.00th=[ 194], 5.00th=[ 260], 10.00th=[ 293], 20.00th=[ 338], 00:14:18.904 | 30.00th=[ 359], 40.00th=[ 379], 50.00th=[ 404], 60.00th=[ 429], 00:14:18.904 | 70.00th=[ 449], 80.00th=[ 465], 90.00th=[ 494], 95.00th=[ 523], 00:14:18.904 | 99.00th=[ 627], 99.50th=[ 635], 99.90th=[ 725], 99.95th=[ 734], 00:14:18.904 | 99.99th=[ 734] 00:14:18.904 bw ( KiB/s): min= 4096, max= 4096, per=35.55%, avg=4096.00, stdev= 0.00, samples=1 00:14:18.904 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:18.904 lat (usec) : 250=2.11%, 500=55.95%, 750=36.11%, 1000=5.58% 00:14:18.904 lat (msec) : 2=0.21%, 10=0.05% 00:14:18.904 cpu : usr=2.50%, sys=5.50%, ctx=1901, majf=0, minf=1 00:14:18.904 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:18.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.904 issued rwts: total=876,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:18.904 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:18.904 job3: (groupid=0, jobs=1): err= 0: pid=231948: Tue Oct 8 17:30:10 2024 00:14:18.905 read: IOPS=338, BW=1355KiB/s (1387kB/s)(1356KiB/1001msec) 00:14:18.905 slat (nsec): min=6671, max=44066, avg=18669.40, stdev=9476.93 00:14:18.905 clat (usec): min=367, max=41590, avg=2344.74, stdev=8342.59 00:14:18.905 lat (usec): min=393, max=41600, avg=2363.41, stdev=8343.86 00:14:18.905 clat percentiles (usec): 00:14:18.905 | 1.00th=[ 388], 5.00th=[ 437], 10.00th=[ 457], 20.00th=[ 482], 00:14:18.905 | 30.00th=[ 506], 40.00th=[ 523], 50.00th=[ 545], 60.00th=[ 562], 00:14:18.905 | 70.00th=[ 578], 80.00th=[ 594], 90.00th=[ 635], 95.00th=[ 807], 00:14:18.905 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:14:18.905 | 99.99th=[41681] 00:14:18.905 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:14:18.905 slat (nsec): min=9848, max=50458, avg=25601.89, stdev=10732.59 00:14:18.905 clat (usec): 
min=102, max=988, avg=353.77, stdev=85.90 00:14:18.905 lat (usec): min=112, max=1022, avg=379.37, stdev=89.57 00:14:18.905 clat percentiles (usec): 00:14:18.905 | 1.00th=[ 116], 5.00th=[ 231], 10.00th=[ 253], 20.00th=[ 281], 00:14:18.905 | 30.00th=[ 318], 40.00th=[ 347], 50.00th=[ 367], 60.00th=[ 379], 00:14:18.905 | 70.00th=[ 396], 80.00th=[ 424], 90.00th=[ 449], 95.00th=[ 469], 00:14:18.905 | 99.00th=[ 506], 99.50th=[ 545], 99.90th=[ 988], 99.95th=[ 988], 00:14:18.905 | 99.99th=[ 988] 00:14:18.905 bw ( KiB/s): min= 4096, max= 4096, per=35.55%, avg=4096.00, stdev= 0.00, samples=1 00:14:18.905 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:18.905 lat (usec) : 250=5.64%, 500=64.63%, 750=27.50%, 1000=0.35% 00:14:18.905 lat (msec) : 10=0.12%, 50=1.76% 00:14:18.905 cpu : usr=0.60%, sys=2.40%, ctx=851, majf=0, minf=1 00:14:18.905 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:18.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.905 issued rwts: total=339,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:18.905 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:18.905 00:14:18.905 Run status group 0 (all jobs): 00:14:18.905 READ: bw=6922KiB/s (7088kB/s), 75.3KiB/s-3500KiB/s (77.1kB/s-3585kB/s), io=6984KiB (7152kB), run=1001-1009msec 00:14:18.905 WRITE: bw=11.2MiB/s (11.8MB/s), 2030KiB/s-4092KiB/s (2078kB/s-4190kB/s), io=11.4MiB (11.9MB), run=1001-1009msec 00:14:18.905 00:14:18.905 Disk stats (read/write): 00:14:18.905 nvme0n1: ios=63/512, merge=0/0, ticks=559/202, in_queue=761, util=86.47% 00:14:18.905 nvme0n2: ios=496/512, merge=0/0, ticks=535/288, in_queue=823, util=87.31% 00:14:18.905 nvme0n3: ios=582/1024, merge=0/0, ticks=341/393, in_queue=734, util=88.19% 00:14:18.905 nvme0n4: ios=34/512, merge=0/0, ticks=632/175, in_queue=807, util=89.33% 00:14:18.905 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:18.905 [global] 00:14:18.905 thread=1 00:14:18.905 invalidate=1 00:14:18.905 rw=randwrite 00:14:18.905 time_based=1 00:14:18.905 runtime=1 00:14:18.905 ioengine=libaio 00:14:18.905 direct=1 00:14:18.905 bs=4096 00:14:18.905 iodepth=1 00:14:18.905 norandommap=0 00:14:18.905 numjobs=1 00:14:18.905 00:14:18.905 verify_dump=1 00:14:18.905 verify_backlog=512 00:14:18.905 verify_state_save=0 00:14:18.905 do_verify=1 00:14:18.905 verify=crc32c-intel 00:14:18.905 [job0] 00:14:18.905 filename=/dev/nvme0n1 00:14:18.905 [job1] 00:14:18.905 filename=/dev/nvme0n2 00:14:18.905 [job2] 00:14:18.905 filename=/dev/nvme0n3 00:14:18.905 [job3] 00:14:18.905 filename=/dev/nvme0n4 00:14:18.905 Could not set queue depth (nvme0n1) 00:14:18.905 Could not set queue depth (nvme0n2) 00:14:18.905 Could not set queue depth (nvme0n3) 00:14:18.905 Could not set queue depth (nvme0n4) 00:14:19.166 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:19.166 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:19.166 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:19.166 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:19.166 fio-3.35 00:14:19.166 Starting 
4 threads 00:14:20.550 00:14:20.550 job0: (groupid=0, jobs=1): err= 0: pid=232617: Tue Oct 8 17:30:12 2024 00:14:20.550 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:14:20.550 slat (nsec): min=26424, max=40854, avg=26921.84, stdev=668.98 00:14:20.550 clat (usec): min=733, max=1170, avg=966.62, stdev=63.81 00:14:20.550 lat (usec): min=761, max=1196, avg=993.54, stdev=63.69 00:14:20.550 clat percentiles (usec): 00:14:20.550 | 1.00th=[ 766], 5.00th=[ 848], 10.00th=[ 889], 20.00th=[ 922], 00:14:20.550 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 988], 00:14:20.550 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1057], 00:14:20.550 | 99.00th=[ 1090], 99.50th=[ 1106], 99.90th=[ 1172], 99.95th=[ 1172], 00:14:20.550 | 99.99th=[ 1172] 00:14:20.550 write: IOPS=720, BW=2881KiB/s (2950kB/s)(2884KiB/1001msec); 0 zone resets 00:14:20.550 slat (usec): min=9, max=38903, avg=83.80, stdev=1447.75 00:14:20.550 clat (usec): min=313, max=859, avg=584.63, stdev=101.45 00:14:20.550 lat (usec): min=323, max=39453, avg=668.43, stdev=1450.35 00:14:20.550 clat percentiles (usec): 00:14:20.550 | 1.00th=[ 343], 5.00th=[ 383], 10.00th=[ 445], 20.00th=[ 498], 00:14:20.550 | 30.00th=[ 545], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 619], 00:14:20.550 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 709], 95.00th=[ 734], 00:14:20.550 | 99.00th=[ 766], 99.50th=[ 791], 99.90th=[ 857], 99.95th=[ 857], 00:14:20.550 | 99.99th=[ 857] 00:14:20.550 bw ( KiB/s): min= 4087, max= 4087, per=29.21%, avg=4087.00, stdev= 0.00, samples=1 00:14:20.550 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:14:20.550 lat (usec) : 500=11.76%, 750=45.01%, 1000=31.39% 00:14:20.550 lat (msec) : 2=11.84% 00:14:20.550 cpu : usr=2.70%, sys=4.60%, ctx=1236, majf=0, minf=1 00:14:20.550 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.550 issued rwts: total=512,721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.550 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.550 job1: (groupid=0, jobs=1): err= 0: pid=232619: Tue Oct 8 17:30:12 2024 00:14:20.550 read: IOPS=713, BW=2853KiB/s (2922kB/s)(2856KiB/1001msec) 00:14:20.550 slat (nsec): min=6880, max=43625, avg=22814.81, stdev=7030.41 00:14:20.550 clat (usec): min=344, max=1240, avg=732.42, stdev=93.98 00:14:20.550 lat (usec): min=370, max=1266, avg=755.24, stdev=95.29 00:14:20.550 clat percentiles (usec): 00:14:20.550 | 1.00th=[ 469], 5.00th=[ 570], 10.00th=[ 627], 20.00th=[ 660], 00:14:20.550 | 30.00th=[ 685], 40.00th=[ 717], 50.00th=[ 742], 60.00th=[ 766], 00:14:20.550 | 70.00th=[ 791], 80.00th=[ 807], 90.00th=[ 840], 95.00th=[ 865], 00:14:20.550 | 99.00th=[ 906], 99.50th=[ 955], 99.90th=[ 1237], 99.95th=[ 1237], 00:14:20.550 | 99.99th=[ 1237] 00:14:20.550 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:14:20.550 slat (nsec): min=9476, max=62092, avg=27031.28, stdev=9242.31 00:14:20.550 clat (usec): min=123, max=1179, avg=410.30, stdev=89.44 00:14:20.550 lat (usec): min=155, max=1210, avg=437.33, stdev=91.73 00:14:20.550 clat percentiles (usec): 00:14:20.550 | 1.00th=[ 223], 5.00th=[ 281], 10.00th=[ 306], 20.00th=[ 334], 00:14:20.550 | 30.00th=[ 359], 40.00th=[ 400], 50.00th=[ 420], 60.00th=[ 437], 00:14:20.551 | 70.00th=[ 453], 80.00th=[ 469], 90.00th=[ 498], 95.00th=[ 537], 00:14:20.551 | 99.00th=[ 685], 
99.50th=[ 734], 99.90th=[ 840], 99.95th=[ 1172], 00:14:20.551 | 99.99th=[ 1172] 00:14:20.551 bw ( KiB/s): min= 4087, max= 4087, per=29.21%, avg=4087.00, stdev= 0.00, samples=1 00:14:20.551 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:14:20.551 lat (usec) : 250=1.15%, 500=52.82%, 750=26.41%, 1000=19.45% 00:14:20.551 lat (msec) : 2=0.17% 00:14:20.551 cpu : usr=3.20%, sys=3.90%, ctx=1738, majf=0, minf=2 00:14:20.551 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.551 issued rwts: total=714,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.551 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.551 job2: (groupid=0, jobs=1): err= 0: pid=232620: Tue Oct 8 17:30:12 2024 00:14:20.551 read: IOPS=614, BW=2458KiB/s (2517kB/s)(2460KiB/1001msec) 00:14:20.551 slat (nsec): min=6883, max=46641, avg=24319.66, stdev=5582.74 00:14:20.551 clat (usec): min=276, max=1043, avg=742.69, stdev=141.31 00:14:20.551 lat (usec): min=302, max=1069, avg=767.01, stdev=141.97 00:14:20.551 clat percentiles (usec): 00:14:20.551 | 1.00th=[ 392], 5.00th=[ 486], 10.00th=[ 553], 20.00th=[ 619], 00:14:20.551 | 30.00th=[ 676], 40.00th=[ 717], 50.00th=[ 758], 60.00th=[ 799], 00:14:20.551 | 70.00th=[ 832], 80.00th=[ 873], 90.00th=[ 914], 95.00th=[ 938], 00:14:20.551 | 99.00th=[ 1012], 99.50th=[ 1037], 99.90th=[ 1045], 99.95th=[ 1045], 00:14:20.551 | 99.99th=[ 1045] 00:14:20.551 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:14:20.551 slat (nsec): min=9547, max=63538, avg=29864.42, stdev=7546.03 00:14:20.551 clat (usec): min=131, max=1217, avg=473.76, stdev=131.73 00:14:20.551 lat (usec): min=142, max=1248, avg=503.62, stdev=133.10 00:14:20.551 clat percentiles (usec): 00:14:20.551 | 1.00th=[ 186], 5.00th=[ 265], 10.00th=[ 289], 20.00th=[ 363], 00:14:20.551 | 30.00th=[ 404], 40.00th=[ 441], 50.00th=[ 478], 60.00th=[ 506], 00:14:20.551 | 70.00th=[ 545], 80.00th=[ 586], 90.00th=[ 644], 95.00th=[ 685], 00:14:20.551 | 99.00th=[ 766], 99.50th=[ 783], 99.90th=[ 1012], 99.95th=[ 1221], 00:14:20.551 | 99.99th=[ 1221] 00:14:20.551 bw ( KiB/s): min= 4087, max= 4087, per=29.21%, avg=4087.00, stdev= 0.00, samples=1 00:14:20.551 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:14:20.551 lat (usec) : 250=1.89%, 500=36.42%, 750=41.24%, 1000=19.71% 00:14:20.551 lat (msec) : 2=0.73% 00:14:20.551 cpu : usr=2.30%, sys=5.00%, ctx=1639, majf=0, minf=1 00:14:20.551 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.551 issued rwts: total=615,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.551 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.551 job3: (groupid=0, jobs=1): err= 0: pid=232621: Tue Oct 8 17:30:12 2024 00:14:20.551 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:14:20.551 slat (nsec): min=8709, max=44486, avg=26849.11, stdev=2846.29 00:14:20.551 clat (usec): min=741, max=1230, avg=1006.09, stdev=88.47 00:14:20.551 lat (usec): min=768, max=1257, avg=1032.94, stdev=88.50 00:14:20.551 clat percentiles (usec): 00:14:20.551 | 1.00th=[ 766], 5.00th=[ 816], 10.00th=[ 881], 20.00th=[ 947], 00:14:20.551 | 30.00th=[ 979], 40.00th=[ 
1004], 50.00th=[ 1020], 60.00th=[ 1037], 00:14:20.551 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1123], 00:14:20.551 | 99.00th=[ 1205], 99.50th=[ 1205], 99.90th=[ 1237], 99.95th=[ 1237], 00:14:20.551 | 99.99th=[ 1237] 00:14:20.551 write: IOPS=731, BW=2925KiB/s (2995kB/s)(2928KiB/1001msec); 0 zone resets 00:14:20.551 slat (nsec): min=9518, max=86053, avg=28740.49, stdev=10443.94 00:14:20.551 clat (usec): min=209, max=976, avg=601.64, stdev=116.41 00:14:20.551 lat (usec): min=221, max=1009, avg=630.38, stdev=120.95 00:14:20.551 clat percentiles (usec): 00:14:20.551 | 1.00th=[ 343], 5.00th=[ 375], 10.00th=[ 453], 20.00th=[ 494], 00:14:20.551 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 644], 00:14:20.551 | 70.00th=[ 676], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 783], 00:14:20.551 | 99.00th=[ 840], 99.50th=[ 873], 99.90th=[ 979], 99.95th=[ 979], 00:14:20.551 | 99.99th=[ 979] 00:14:20.551 bw ( KiB/s): min= 4087, max= 4087, per=29.21%, avg=4087.00, stdev= 0.00, samples=1 00:14:20.551 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:14:20.551 lat (usec) : 250=0.08%, 500=12.54%, 750=41.72%, 1000=20.58% 00:14:20.551 lat (msec) : 2=25.08% 00:14:20.551 cpu : usr=1.90%, sys=3.50%, ctx=1245, majf=0, minf=1 00:14:20.551 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.551 issued rwts: total=512,732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.551 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.551 00:14:20.551 Run status group 0 (all jobs): 00:14:20.551 READ: bw=9403KiB/s (9628kB/s), 2046KiB/s-2853KiB/s (2095kB/s-2922kB/s), io=9412KiB (9638kB), run=1001-1001msec 00:14:20.551 WRITE: bw=13.7MiB/s (14.3MB/s), 2881KiB/s-4092KiB/s (2950kB/s-4190kB/s), io=13.7MiB (14.3MB), run=1001-1001msec 00:14:20.551 00:14:20.551 Disk stats (read/write): 00:14:20.551 nvme0n1: ios=509/512, merge=0/0, ticks=982/236, in_queue=1218, util=99.60% 00:14:20.551 nvme0n2: ios=561/981, merge=0/0, ticks=517/393, in_queue=910, util=92.76% 00:14:20.551 nvme0n3: ios=568/862, merge=0/0, ticks=435/380, in_queue=815, util=92.19% 00:14:20.551 nvme0n4: ios=545/512, merge=0/0, ticks=770/294, in_queue=1064, util=97.86% 00:14:20.551 17:30:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:20.551 [global] 00:14:20.551 thread=1 00:14:20.551 invalidate=1 00:14:20.551 rw=write 00:14:20.551 time_based=1 00:14:20.551 runtime=1 00:14:20.551 ioengine=libaio 00:14:20.551 direct=1 00:14:20.551 bs=4096 00:14:20.551 iodepth=128 00:14:20.551 norandommap=0 00:14:20.551 numjobs=1 00:14:20.551 00:14:20.551 verify_dump=1 00:14:20.551 verify_backlog=512 00:14:20.551 verify_state_save=0 00:14:20.551 do_verify=1 00:14:20.551 verify=crc32c-intel 00:14:20.551 [job0] 00:14:20.551 filename=/dev/nvme0n1 00:14:20.551 [job1] 00:14:20.551 filename=/dev/nvme0n2 00:14:20.551 [job2] 00:14:20.551 filename=/dev/nvme0n3 00:14:20.551 [job3] 00:14:20.551 filename=/dev/nvme0n4 00:14:20.551 Could not set queue depth (nvme0n1) 00:14:20.551 Could not set queue depth (nvme0n2) 00:14:20.551 Could not set queue depth (nvme0n3) 00:14:20.551 Could not set queue depth (nvme0n4) 00:14:20.813 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:14:20.813 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:20.813 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:20.813 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:20.813 fio-3.35 00:14:20.813 Starting 4 threads 00:14:22.198 00:14:22.198 job0: (groupid=0, jobs=1): err= 0: pid=233097: Tue Oct 8 17:30:13 2024 00:14:22.198 read: IOPS=6454, BW=25.2MiB/s (26.4MB/s)(25.3MiB/1002msec) 00:14:22.198 slat (nsec): min=912, max=26159k, avg=77834.55, stdev=606353.53 00:14:22.198 clat (usec): min=1062, max=92798, avg=8958.29, stdev=5615.13 00:14:22.198 lat (usec): min=2393, max=92810, avg=9036.12, stdev=5737.11 00:14:22.198 clat percentiles (usec): 00:14:22.198 | 1.00th=[ 4817], 5.00th=[ 5538], 10.00th=[ 6783], 20.00th=[ 7242], 00:14:22.198 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:14:22.198 | 70.00th=[ 8094], 80.00th=[ 9372], 90.00th=[10814], 95.00th=[14091], 00:14:22.198 | 99.00th=[36439], 99.50th=[48497], 99.90th=[62129], 99.95th=[62129], 00:14:22.198 | 99.99th=[92799] 00:14:22.198 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:14:22.198 slat (nsec): min=1587, max=8928.4k, avg=64697.46, stdev=390317.65 00:14:22.198 clat (usec): min=1263, max=92775, avg=10366.93, stdev=11484.43 00:14:22.198 lat (usec): min=1276, max=92777, avg=10431.62, stdev=11519.09 00:14:22.198 clat percentiles (usec): 00:14:22.198 | 1.00th=[ 2212], 5.00th=[ 4047], 10.00th=[ 5473], 20.00th=[ 6718], 00:14:22.198 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7439], 00:14:22.198 | 70.00th=[ 7701], 80.00th=[ 8848], 90.00th=[16909], 95.00th=[25297], 00:14:22.198 | 99.00th=[76022], 99.50th=[80217], 99.90th=[87557], 99.95th=[87557], 00:14:22.198 | 99.99th=[92799] 00:14:22.198 bw ( KiB/s): min=24032, max=29216, per=26.86%, avg=26624.00, stdev=3665.64, samples=2 00:14:22.198 iops : min= 6008, max= 7304, avg=6656.00, stdev=916.41, samples=2 00:14:22.198 lat (msec) : 2=0.37%, 4=2.21%, 10=80.52%, 20=11.37%, 50=4.02% 00:14:22.198 lat (msec) : 100=1.51% 00:14:22.198 cpu : usr=5.09%, sys=5.69%, ctx=730, majf=0, minf=2 00:14:22.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:14:22.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:22.198 issued rwts: total=6467,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.198 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:22.198 job1: (groupid=0, jobs=1): err= 0: pid=233112: Tue Oct 8 17:30:13 2024 00:14:22.198 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:14:22.198 slat (nsec): min=926, max=12748k, avg=92763.31, stdev=699897.46 00:14:22.198 clat (usec): min=3520, max=87310, avg=11790.91, stdev=8335.98 00:14:22.198 lat (usec): min=3528, max=87317, avg=11883.67, stdev=8420.78 00:14:22.198 clat percentiles (usec): 00:14:22.198 | 1.00th=[ 4359], 5.00th=[ 5538], 10.00th=[ 6390], 20.00th=[ 6915], 00:14:22.198 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 9634], 60.00th=[11469], 00:14:22.198 | 70.00th=[13173], 80.00th=[14615], 90.00th=[17171], 95.00th=[20579], 00:14:22.198 | 99.00th=[59507], 99.50th=[74974], 99.90th=[84411], 99.95th=[87557], 00:14:22.198 | 99.99th=[87557] 00:14:22.198 write: IOPS=5785, BW=22.6MiB/s (23.7MB/s)(22.7MiB/1005msec); 0 zone 
resets 00:14:22.198 slat (nsec): min=1661, max=12175k, avg=71358.47, stdev=554344.56 00:14:22.198 clat (usec): min=670, max=87280, avg=10457.18, stdev=9355.17 00:14:22.198 lat (usec): min=679, max=87290, avg=10528.54, stdev=9398.94 00:14:22.198 clat percentiles (usec): 00:14:22.198 | 1.00th=[ 2057], 5.00th=[ 3949], 10.00th=[ 4490], 20.00th=[ 6194], 00:14:22.198 | 30.00th=[ 6783], 40.00th=[ 7242], 50.00th=[ 7504], 60.00th=[ 8586], 00:14:22.198 | 70.00th=[10945], 80.00th=[13566], 90.00th=[15795], 95.00th=[22414], 00:14:22.198 | 99.00th=[66323], 99.50th=[77071], 99.90th=[80217], 99.95th=[80217], 00:14:22.198 | 99.99th=[87557] 00:14:22.198 bw ( KiB/s): min=21200, max=24488, per=23.05%, avg=22844.00, stdev=2324.97, samples=2 00:14:22.198 iops : min= 5300, max= 6122, avg=5711.00, stdev=581.24, samples=2 00:14:22.198 lat (usec) : 750=0.10%, 1000=0.01% 00:14:22.198 lat (msec) : 2=0.39%, 4=2.40%, 10=57.74%, 20=32.00%, 50=5.97% 00:14:22.198 lat (msec) : 100=1.39% 00:14:22.198 cpu : usr=3.88%, sys=7.07%, ctx=470, majf=0, minf=1 00:14:22.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:22.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:22.198 issued rwts: total=5632,5814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.198 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:22.198 job2: (groupid=0, jobs=1): err= 0: pid=233132: Tue Oct 8 17:30:13 2024 00:14:22.198 read: IOPS=7586, BW=29.6MiB/s (31.1MB/s)(29.7MiB/1003msec) 00:14:22.198 slat (nsec): min=948, max=6567.4k, avg=65392.47, stdev=435623.92 00:14:22.198 clat (usec): min=714, max=33557, avg=8657.62, stdev=2770.16 00:14:22.198 lat (usec): min=3071, max=33586, avg=8723.01, stdev=2787.73 00:14:22.198 clat percentiles (usec): 00:14:22.198 | 1.00th=[ 4228], 5.00th=[ 6259], 10.00th=[ 6783], 20.00th=[ 7570], 00:14:22.198 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8160], 60.00th=[ 8356], 00:14:22.198 | 70.00th=[ 8586], 80.00th=[ 9372], 90.00th=[10683], 95.00th=[11731], 00:14:22.198 | 99.00th=[28705], 99.50th=[28967], 99.90th=[33424], 99.95th=[33424], 00:14:22.198 | 99.99th=[33817] 00:14:22.198 write: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec); 0 zone resets 00:14:22.198 slat (nsec): min=1653, max=14252k, avg=61135.66, stdev=419451.49 00:14:22.198 clat (usec): min=1277, max=28071, avg=7957.33, stdev=2034.52 00:14:22.198 lat (usec): min=1289, max=28075, avg=8018.46, stdev=2066.74 00:14:22.198 clat percentiles (usec): 00:14:22.198 | 1.00th=[ 4178], 5.00th=[ 4948], 10.00th=[ 5604], 20.00th=[ 7308], 00:14:22.198 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 7898], 60.00th=[ 8029], 00:14:22.198 | 70.00th=[ 8160], 80.00th=[ 8455], 90.00th=[ 9634], 95.00th=[11076], 00:14:22.198 | 99.00th=[13304], 99.50th=[16909], 99.90th=[27919], 99.95th=[28181], 00:14:22.198 | 99.99th=[28181] 00:14:22.198 bw ( KiB/s): min=30080, max=31360, per=30.99%, avg=30720.00, stdev=905.10, samples=2 00:14:22.198 iops : min= 7520, max= 7840, avg=7680.00, stdev=226.27, samples=2 00:14:22.198 lat (usec) : 750=0.01% 00:14:22.198 lat (msec) : 2=0.02%, 4=0.56%, 10=87.40%, 20=11.18%, 50=0.83% 00:14:22.198 cpu : usr=4.59%, sys=6.19%, ctx=656, majf=0, minf=1 00:14:22.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:14:22.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:22.198 
issued rwts: total=7609,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.199 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:22.199 job3: (groupid=0, jobs=1): err= 0: pid=233140: Tue Oct 8 17:30:13 2024 00:14:22.199 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:14:22.199 slat (nsec): min=987, max=17941k, avg=102035.04, stdev=838344.01 00:14:22.199 clat (usec): min=4708, max=39948, avg=13409.37, stdev=4441.71 00:14:22.199 lat (usec): min=4713, max=39974, avg=13511.40, stdev=4524.00 00:14:22.199 clat percentiles (usec): 00:14:22.199 | 1.00th=[ 6456], 5.00th=[ 7832], 10.00th=[ 8586], 20.00th=[ 9110], 00:14:22.199 | 30.00th=[10421], 40.00th=[11338], 50.00th=[13173], 60.00th=[14222], 00:14:22.199 | 70.00th=[15401], 80.00th=[16712], 90.00th=[19268], 95.00th=[20317], 00:14:22.199 | 99.00th=[26608], 99.50th=[26608], 99.90th=[34866], 99.95th=[34866], 00:14:22.199 | 99.99th=[40109] 00:14:22.199 write: IOPS=4829, BW=18.9MiB/s (19.8MB/s)(19.1MiB/1010msec); 0 zone resets 00:14:22.199 slat (nsec): min=1664, max=12823k, avg=96760.59, stdev=647517.95 00:14:22.199 clat (usec): min=1245, max=76183, avg=13598.99, stdev=9791.64 00:14:22.199 lat (usec): min=1258, max=76191, avg=13695.75, stdev=9854.38 00:14:22.199 clat percentiles (usec): 00:14:22.199 | 1.00th=[ 3064], 5.00th=[ 5014], 10.00th=[ 5997], 20.00th=[ 7439], 00:14:22.199 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[11076], 60.00th=[13042], 00:14:22.199 | 70.00th=[15139], 80.00th=[16188], 90.00th=[22414], 95.00th=[27657], 00:14:22.199 | 99.00th=[63701], 99.50th=[69731], 99.90th=[76022], 99.95th=[76022], 00:14:22.199 | 99.99th=[76022] 00:14:22.199 bw ( KiB/s): min=14536, max=23464, per=19.17%, avg=19000.00, stdev=6313.05, samples=2 00:14:22.199 iops : min= 3634, max= 5866, avg=4750.00, stdev=1578.26, samples=2 00:14:22.199 lat (msec) : 2=0.18%, 4=0.63%, 10=34.85%, 20=53.15%, 50=10.10% 00:14:22.199 lat (msec) : 100=1.09% 00:14:22.199 cpu : usr=3.96%, sys=5.35%, ctx=315, majf=0, minf=2 00:14:22.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:14:22.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:22.199 issued rwts: total=4608,4878,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.199 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:22.199 00:14:22.199 Run status group 0 (all jobs): 00:14:22.199 READ: bw=94.0MiB/s (98.6MB/s), 17.8MiB/s-29.6MiB/s (18.7MB/s-31.1MB/s), io=95.0MiB (99.6MB), run=1002-1010msec 00:14:22.199 WRITE: bw=96.8MiB/s (101MB/s), 18.9MiB/s-29.9MiB/s (19.8MB/s-31.4MB/s), io=97.8MiB (103MB), run=1002-1010msec 00:14:22.199 00:14:22.199 Disk stats (read/write): 00:14:22.199 nvme0n1: ios=5169/5191, merge=0/0, ticks=27292/36882, in_queue=64174, util=83.87% 00:14:22.199 nvme0n2: ios=4663/4802, merge=0/0, ticks=51964/48669, in_queue=100633, util=91.03% 00:14:22.199 nvme0n3: ios=6168/6567, merge=0/0, ticks=36462/33326, in_queue=69788, util=91.77% 00:14:22.199 nvme0n4: ios=4153/4320, merge=0/0, ticks=51257/48464, in_queue=99721, util=96.26% 00:14:22.199 17:30:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:22.199 [global] 00:14:22.199 thread=1 00:14:22.199 invalidate=1 00:14:22.199 rw=randwrite 00:14:22.199 time_based=1 00:14:22.199 runtime=1 00:14:22.199 ioengine=libaio 00:14:22.199 direct=1 00:14:22.199 bs=4096 
00:14:22.199 iodepth=128 00:14:22.199 norandommap=0 00:14:22.199 numjobs=1 00:14:22.199 00:14:22.199 verify_dump=1 00:14:22.199 verify_backlog=512 00:14:22.199 verify_state_save=0 00:14:22.199 do_verify=1 00:14:22.199 verify=crc32c-intel 00:14:22.199 [job0] 00:14:22.199 filename=/dev/nvme0n1 00:14:22.199 [job1] 00:14:22.199 filename=/dev/nvme0n2 00:14:22.199 [job2] 00:14:22.199 filename=/dev/nvme0n3 00:14:22.199 [job3] 00:14:22.199 filename=/dev/nvme0n4 00:14:22.199 Could not set queue depth (nvme0n1) 00:14:22.199 Could not set queue depth (nvme0n2) 00:14:22.199 Could not set queue depth (nvme0n3) 00:14:22.199 Could not set queue depth (nvme0n4) 00:14:22.460 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:22.460 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:22.460 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:22.460 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:22.460 fio-3.35 00:14:22.460 Starting 4 threads 00:14:23.846 00:14:23.846 job0: (groupid=0, jobs=1): err= 0: pid=233557: Tue Oct 8 17:30:15 2024 00:14:23.846 read: IOPS=4035, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1015msec) 00:14:23.846 slat (nsec): min=923, max=11453k, avg=87110.63, stdev=639570.23 00:14:23.846 clat (usec): min=2684, max=59881, avg=10143.79, stdev=6341.73 00:14:23.846 lat (usec): min=2687, max=59889, avg=10230.90, stdev=6416.68 00:14:23.846 clat percentiles (usec): 00:14:23.846 | 1.00th=[ 3195], 5.00th=[ 4883], 10.00th=[ 5669], 20.00th=[ 6390], 00:14:23.846 | 30.00th=[ 7046], 40.00th=[ 7635], 50.00th=[ 8356], 60.00th=[ 8979], 00:14:23.846 | 70.00th=[11338], 80.00th=[12649], 90.00th=[16909], 95.00th=[18744], 00:14:23.846 | 99.00th=[42730], 99.50th=[53740], 99.90th=[60031], 99.95th=[60031], 00:14:23.846 | 99.99th=[60031] 00:14:23.846 write: IOPS=4254, BW=16.6MiB/s (17.4MB/s)(16.9MiB/1015msec); 0 zone resets 00:14:23.846 slat (nsec): min=1585, max=8193.6k, avg=141978.84, stdev=706482.64 00:14:23.846 clat (usec): min=1179, max=84529, avg=20186.62, stdev=19719.46 00:14:23.846 lat (usec): min=1188, max=84537, avg=20328.59, stdev=19849.33 00:14:23.846 clat percentiles (usec): 00:14:23.846 | 1.00th=[ 2835], 5.00th=[ 3916], 10.00th=[ 4621], 20.00th=[ 5604], 00:14:23.846 | 30.00th=[ 6783], 40.00th=[ 7177], 50.00th=[ 9241], 60.00th=[14877], 00:14:23.846 | 70.00th=[22676], 80.00th=[38536], 90.00th=[53216], 95.00th=[62653], 00:14:23.846 | 99.00th=[74974], 99.50th=[80217], 99.90th=[84411], 99.95th=[84411], 00:14:23.846 | 99.99th=[84411] 00:14:23.846 bw ( KiB/s): min=10368, max=23152, per=21.55%, avg=16760.00, stdev=9039.65, samples=2 00:14:23.846 iops : min= 2592, max= 5788, avg=4190.00, stdev=2259.91, samples=2 00:14:23.846 lat (msec) : 2=0.15%, 4=3.97%, 10=54.36%, 20=21.99%, 50=12.92% 00:14:23.846 lat (msec) : 100=6.61% 00:14:23.846 cpu : usr=3.16%, sys=3.85%, ctx=411, majf=0, minf=1 00:14:23.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:14:23.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:23.846 issued rwts: total=4096,4318,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:23.846 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:23.846 job1: (groupid=0, jobs=1): err= 0: pid=233572: Tue Oct 8 17:30:15 
2024 00:14:23.846 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:14:23.846 slat (nsec): min=934, max=25847k, avg=151901.00, stdev=1160144.24 00:14:23.846 clat (usec): min=4703, max=71652, avg=18686.73, stdev=17439.23 00:14:23.846 lat (usec): min=4705, max=71659, avg=18838.63, stdev=17562.52 00:14:23.846 clat percentiles (usec): 00:14:23.846 | 1.00th=[ 5800], 5.00th=[ 6521], 10.00th=[ 6915], 20.00th=[ 7504], 00:14:23.846 | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[ 8848], 60.00th=[12256], 00:14:23.846 | 70.00th=[16450], 80.00th=[32375], 90.00th=[51119], 95.00th=[55313], 00:14:23.846 | 99.00th=[70779], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:14:23.846 | 99.99th=[71828] 00:14:23.846 write: IOPS=4195, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1007msec); 0 zone resets 00:14:23.846 slat (nsec): min=1556, max=24111k, avg=83910.83, stdev=722286.53 00:14:23.846 clat (usec): min=1462, max=82941, avg=11912.07, stdev=10378.36 00:14:23.846 lat (usec): min=1471, max=82948, avg=11995.98, stdev=10432.11 00:14:23.846 clat percentiles (usec): 00:14:23.846 | 1.00th=[ 4080], 5.00th=[ 5669], 10.00th=[ 5997], 20.00th=[ 6980], 00:14:23.846 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7635], 60.00th=[ 8848], 00:14:23.846 | 70.00th=[10421], 80.00th=[15795], 90.00th=[23987], 95.00th=[25035], 00:14:23.846 | 99.00th=[63701], 99.50th=[83362], 99.90th=[83362], 99.95th=[83362], 00:14:23.846 | 99.99th=[83362] 00:14:23.846 bw ( KiB/s): min=12296, max=20480, per=21.07%, avg=16388.00, stdev=5786.96, samples=2 00:14:23.846 iops : min= 3074, max= 5120, avg=4097.00, stdev=1446.74, samples=2 00:14:23.846 lat (msec) : 2=0.16%, 4=0.01%, 10=60.95%, 20=19.52%, 50=13.04% 00:14:23.846 lat (msec) : 100=6.32% 00:14:23.846 cpu : usr=2.58%, sys=3.88%, ctx=519, majf=0, minf=1 00:14:23.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:23.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:23.846 issued rwts: total=4096,4225,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:23.846 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:23.846 job2: (groupid=0, jobs=1): err= 0: pid=233594: Tue Oct 8 17:30:15 2024 00:14:23.846 read: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec) 00:14:23.846 slat (nsec): min=957, max=13835k, avg=75539.80, stdev=572354.05 00:14:23.846 clat (usec): min=1201, max=47592, avg=9735.01, stdev=5124.20 00:14:23.846 lat (usec): min=1209, max=47600, avg=9810.55, stdev=5175.76 00:14:23.847 clat percentiles (usec): 00:14:23.847 | 1.00th=[ 2999], 5.00th=[ 4686], 10.00th=[ 5735], 20.00th=[ 6390], 00:14:23.847 | 30.00th=[ 6652], 40.00th=[ 7767], 50.00th=[ 8094], 60.00th=[ 8979], 00:14:23.847 | 70.00th=[10683], 80.00th=[12649], 90.00th=[15664], 95.00th=[17957], 00:14:23.847 | 99.00th=[31327], 99.50th=[36963], 99.90th=[44303], 99.95th=[47449], 00:14:23.847 | 99.99th=[47449] 00:14:23.847 write: IOPS=7075, BW=27.6MiB/s (29.0MB/s)(27.8MiB/1006msec); 0 zone resets 00:14:23.847 slat (nsec): min=1671, max=15806k, avg=56260.15, stdev=489356.36 00:14:23.847 clat (usec): min=372, max=59555, avg=8824.87, stdev=8045.80 00:14:23.847 lat (usec): min=406, max=59559, avg=8881.13, stdev=8071.86 00:14:23.847 clat percentiles (usec): 00:14:23.847 | 1.00th=[ 1156], 5.00th=[ 2008], 10.00th=[ 3064], 20.00th=[ 4752], 00:14:23.847 | 30.00th=[ 5866], 40.00th=[ 6325], 50.00th=[ 6587], 60.00th=[ 7046], 00:14:23.847 | 70.00th=[ 7570], 80.00th=[10290], 90.00th=[16909], 
95.00th=[23462], 00:14:23.847 | 99.00th=[47449], 99.50th=[50594], 99.90th=[57934], 99.95th=[59507], 00:14:23.847 | 99.99th=[59507] 00:14:23.847 bw ( KiB/s): min=27264, max=28656, per=35.94%, avg=27960.00, stdev=984.29, samples=2 00:14:23.847 iops : min= 6816, max= 7164, avg=6990.00, stdev=246.07, samples=2 00:14:23.847 lat (usec) : 500=0.01%, 750=0.09%, 1000=0.18% 00:14:23.847 lat (msec) : 2=2.30%, 4=6.24%, 10=64.86%, 20=21.61%, 50=4.44% 00:14:23.847 lat (msec) : 100=0.26% 00:14:23.847 cpu : usr=5.17%, sys=9.05%, ctx=452, majf=0, minf=2 00:14:23.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:14:23.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:23.847 issued rwts: total=6656,7118,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:23.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:23.847 job3: (groupid=0, jobs=1): err= 0: pid=233601: Tue Oct 8 17:30:15 2024 00:14:23.847 read: IOPS=3531, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1015msec) 00:14:23.847 slat (nsec): min=1012, max=16333k, avg=97410.90, stdev=711838.93 00:14:23.847 clat (usec): min=5475, max=37133, avg=12217.63, stdev=5352.90 00:14:23.847 lat (usec): min=5481, max=37137, avg=12315.04, stdev=5406.39 00:14:23.847 clat percentiles (usec): 00:14:23.847 | 1.00th=[ 6325], 5.00th=[ 6915], 10.00th=[ 7242], 20.00th=[ 8160], 00:14:23.847 | 30.00th=[ 8717], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[11469], 00:14:23.847 | 70.00th=[13435], 80.00th=[16057], 90.00th=[20317], 95.00th=[22938], 00:14:23.847 | 99.00th=[30540], 99.50th=[32113], 99.90th=[36963], 99.95th=[36963], 00:14:23.847 | 99.99th=[36963] 00:14:23.847 write: IOPS=4017, BW=15.7MiB/s (16.5MB/s)(15.9MiB/1015msec); 0 zone resets 00:14:23.847 slat (nsec): min=1579, max=16115k, avg=151103.52, stdev=828264.95 00:14:23.847 clat (usec): min=1074, max=71588, avg=20837.65, stdev=17044.71 00:14:23.847 lat (usec): min=1106, max=71596, avg=20988.75, stdev=17157.47 00:14:23.847 clat percentiles (usec): 00:14:23.847 | 1.00th=[ 3752], 5.00th=[ 4883], 10.00th=[ 6128], 20.00th=[ 7177], 00:14:23.847 | 30.00th=[ 7963], 40.00th=[ 9896], 50.00th=[14877], 60.00th=[20055], 00:14:23.847 | 70.00th=[24773], 80.00th=[34341], 90.00th=[48497], 95.00th=[56886], 00:14:23.847 | 99.00th=[70779], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:14:23.847 | 99.99th=[71828] 00:14:23.847 bw ( KiB/s): min=15216, max=16384, per=20.31%, avg=15800.00, stdev=825.90, samples=2 00:14:23.847 iops : min= 3804, max= 4096, avg=3950.00, stdev=206.48, samples=2 00:14:23.847 lat (msec) : 2=0.04%, 4=1.02%, 10=44.19%, 20=27.24%, 50=22.58% 00:14:23.847 lat (msec) : 100=4.93% 00:14:23.847 cpu : usr=2.76%, sys=4.93%, ctx=302, majf=0, minf=1 00:14:23.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:23.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:23.847 issued rwts: total=3584,4078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:23.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:23.847 00:14:23.847 Run status group 0 (all jobs): 00:14:23.847 READ: bw=70.9MiB/s (74.4MB/s), 13.8MiB/s-25.8MiB/s (14.5MB/s-27.1MB/s), io=72.0MiB (75.5MB), run=1006-1015msec 00:14:23.847 WRITE: bw=76.0MiB/s (79.7MB/s), 15.7MiB/s-27.6MiB/s (16.5MB/s-29.0MB/s), io=77.1MiB (80.8MB), run=1006-1015msec 00:14:23.847 00:14:23.847 Disk stats 
(read/write): 00:14:23.847 nvme0n1: ios=3630/3919, merge=0/0, ticks=29842/70398, in_queue=100240, util=87.17% 00:14:23.847 nvme0n2: ios=3831/4096, merge=0/0, ticks=19383/16882, in_queue=36265, util=90.11% 00:14:23.847 nvme0n3: ios=5257/5750, merge=0/0, ticks=44881/50517, in_queue=95398, util=92.30% 00:14:23.847 nvme0n4: ios=3627/3599, merge=0/0, ticks=35235/50620, in_queue=85855, util=96.47% 00:14:23.847 17:30:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:23.847 17:30:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=233704 00:14:23.847 17:30:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:23.847 17:30:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:23.847 [global] 00:14:23.847 thread=1 00:14:23.847 invalidate=1 00:14:23.847 rw=read 00:14:23.847 time_based=1 00:14:23.847 runtime=10 00:14:23.847 ioengine=libaio 00:14:23.847 direct=1 00:14:23.847 bs=4096 00:14:23.847 iodepth=1 00:14:23.847 norandommap=1 00:14:23.847 numjobs=1 00:14:23.847 00:14:23.847 [job0] 00:14:23.847 filename=/dev/nvme0n1 00:14:23.847 [job1] 00:14:23.847 filename=/dev/nvme0n2 00:14:23.847 [job2] 00:14:23.847 filename=/dev/nvme0n3 00:14:23.847 [job3] 00:14:23.847 filename=/dev/nvme0n4 00:14:23.847 Could not set queue depth (nvme0n1) 00:14:23.847 Could not set queue depth (nvme0n2) 00:14:23.847 Could not set queue depth (nvme0n3) 00:14:23.847 Could not set queue depth (nvme0n4) 00:14:24.108 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:24.108 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:24.108 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:24.108 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:24.108 fio-3.35 00:14:24.108 Starting 4 threads 00:14:26.653 17:30:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:26.914 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=258048, buflen=4096 00:14:26.914 fio: pid=234112, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:26.914 17:30:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:27.174 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=10526720, buflen=4096 00:14:27.174 fio: pid=234105, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:27.174 17:30:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:27.174 17:30:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:27.436 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=802816, buflen=4096 00:14:27.436 fio: pid=234068, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:27.436 17:30:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev 
in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:27.436 17:30:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:27.436 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=7663616, buflen=4096 00:14:27.436 fio: pid=234084, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:27.436 17:30:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:27.436 17:30:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:27.436 00:14:27.436 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=234068: Tue Oct 8 17:30:19 2024 00:14:27.436 read: IOPS=66, BW=264KiB/s (270kB/s)(784KiB/2971msec) 00:14:27.436 slat (usec): min=6, max=11728, avg=82.48, stdev=834.00 00:14:27.436 clat (usec): min=521, max=43008, avg=14954.20, stdev=19475.88 00:14:27.436 lat (usec): min=545, max=52998, avg=15036.98, stdev=19575.83 00:14:27.436 clat percentiles (usec): 00:14:27.436 | 1.00th=[ 594], 5.00th=[ 775], 10.00th=[ 848], 20.00th=[ 898], 00:14:27.436 | 30.00th=[ 947], 40.00th=[ 988], 50.00th=[ 1029], 60.00th=[ 1074], 00:14:27.436 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:14:27.436 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:14:27.436 | 99.99th=[43254] 00:14:27.436 bw ( KiB/s): min= 96, max= 720, per=4.95%, avg=296.00, stdev=280.11, samples=5 00:14:27.436 iops : min= 24, max= 180, avg=74.00, stdev=70.03, samples=5 00:14:27.436 lat (usec) : 750=3.55%, 1000=39.09% 00:14:27.436 lat (msec) : 2=22.84%, 50=34.01% 00:14:27.436 cpu : usr=0.07%, sys=0.17%, ctx=198, majf=0, minf=1 00:14:27.436 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:27.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.436 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.436 issued rwts: total=197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.436 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:27.436 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=234084: Tue Oct 8 17:30:19 2024 00:14:27.436 read: IOPS=595, BW=2383KiB/s (2440kB/s)(7484KiB/3141msec) 00:14:27.436 slat (usec): min=6, max=11983, avg=34.60, stdev=315.06 00:14:27.436 clat (usec): min=239, max=43019, avg=1625.99, stdev=5834.93 00:14:27.436 lat (usec): min=264, max=43044, avg=1660.60, stdev=5843.81 00:14:27.436 clat percentiles (usec): 00:14:27.436 | 1.00th=[ 359], 5.00th=[ 490], 10.00th=[ 562], 20.00th=[ 644], 00:14:27.436 | 30.00th=[ 709], 40.00th=[ 766], 50.00th=[ 824], 60.00th=[ 865], 00:14:27.436 | 70.00th=[ 906], 80.00th=[ 947], 90.00th=[ 979], 95.00th=[ 1020], 00:14:27.436 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:14:27.436 | 99.99th=[43254] 00:14:27.436 bw ( KiB/s): min= 96, max= 5080, per=40.70%, avg=2436.17, stdev=2413.01, samples=6 00:14:27.436 iops : min= 24, max= 1270, avg=609.00, stdev=603.29, samples=6 00:14:27.436 lat (usec) : 250=0.05%, 500=5.50%, 750=31.46%, 1000=56.41% 00:14:27.436 lat (msec) : 2=4.49%, 50=2.03% 00:14:27.436 cpu : usr=0.70%, sys=1.69%, ctx=1876, majf=0, minf=2 00:14:27.436 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:27.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.436 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.436 issued rwts: total=1872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.436 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:27.436 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=234105: Tue Oct 8 17:30:19 2024 00:14:27.436 read: IOPS=923, BW=3691KiB/s (3780kB/s)(10.0MiB/2785msec) 00:14:27.436 slat (usec): min=6, max=7936, avg=29.64, stdev=190.44 00:14:27.436 clat (usec): min=543, max=42011, avg=1039.02, stdev=1750.33 00:14:27.436 lat (usec): min=560, max=42036, avg=1068.66, stdev=1760.53 00:14:27.436 clat percentiles (usec): 00:14:27.436 | 1.00th=[ 701], 5.00th=[ 783], 10.00th=[ 832], 20.00th=[ 889], 00:14:27.436 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 988], 00:14:27.436 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1074], 95.00th=[ 1106], 00:14:27.436 | 99.00th=[ 1188], 99.50th=[ 1287], 99.90th=[42206], 99.95th=[42206], 00:14:27.436 | 99.99th=[42206] 00:14:27.436 bw ( KiB/s): min= 3040, max= 4048, per=61.77%, avg=3697.60, stdev=452.42, samples=5 00:14:27.436 iops : min= 760, max= 1012, avg=924.40, stdev=113.11, samples=5 00:14:27.436 lat (usec) : 750=2.10%, 1000=65.62% 00:14:27.436 lat (msec) : 2=32.05%, 50=0.19% 00:14:27.436 cpu : usr=0.75%, sys=2.98%, ctx=2573, majf=0, minf=2 00:14:27.436 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:27.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.436 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.436 issued rwts: total=2571,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.436 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:27.436 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=234112: Tue Oct 8 17:30:19 2024 00:14:27.436 read: IOPS=24, BW=96.7KiB/s (99.0kB/s)(252KiB/2607msec) 00:14:27.436 slat (nsec): min=24683, max=66375, avg=25732.89, stdev=5164.13 00:14:27.436 clat (usec): min=650, max=42525, avg=41003.07, stdev=5190.76 00:14:27.436 lat (usec): min=716, max=42551, avg=41028.81, stdev=5185.58 00:14:27.436 clat percentiles (usec): 00:14:27.436 | 1.00th=[ 652], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:14:27.436 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:14:27.436 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:27.436 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:14:27.436 | 99.99th=[42730] 00:14:27.436 bw ( KiB/s): min= 96, max= 104, per=1.62%, avg=97.60, stdev= 3.58, samples=5 00:14:27.436 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:14:27.436 lat (usec) : 750=1.56% 00:14:27.436 lat (msec) : 50=96.88% 00:14:27.436 cpu : usr=0.00%, sys=0.12%, ctx=65, majf=0, minf=2 00:14:27.436 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:27.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.436 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.436 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.436 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:27.436 00:14:27.436 Run status group 0 (all jobs): 
00:14:27.437 READ: bw=5985KiB/s (6129kB/s), 96.7KiB/s-3691KiB/s (99.0kB/s-3780kB/s), io=18.4MiB (19.3MB), run=2607-3141msec 00:14:27.437 00:14:27.437 Disk stats (read/write): 00:14:27.437 nvme0n1: ios=193/0, merge=0/0, ticks=2803/0, in_queue=2803, util=94.36% 00:14:27.437 nvme0n2: ios=1839/0, merge=0/0, ticks=2956/0, in_queue=2956, util=95.14% 00:14:27.437 nvme0n3: ios=2395/0, merge=0/0, ticks=2488/0, in_queue=2488, util=95.99% 00:14:27.437 nvme0n4: ios=63/0, merge=0/0, ticks=2585/0, in_queue=2585, util=96.39% 00:14:27.697 17:30:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:27.697 17:30:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:27.958 17:30:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:27.958 17:30:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:27.958 17:30:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:27.958 17:30:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:28.218 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:28.218 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:28.479 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:28.479 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 233704 00:14:28.479 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:28.479 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:28.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.479 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:28.479 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:14:28.479 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:28.479 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:28.479 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:28.479 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:28.479 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:14:28.479 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:28.479 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:28.479 nvmf hotplug 
test: fio failed as expected 00:14:28.479 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:28.739 rmmod nvme_tcp 00:14:28.739 rmmod nvme_fabrics 00:14:28.739 rmmod nvme_keyring 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 229695 ']' 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 229695 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 229695 ']' 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 229695 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 229695 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 229695' 00:14:28.739 killing process with pid 229695 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 229695 00:14:28.739 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 229695 00:14:29.000 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:29.000 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp 
== \t\c\p ]] 00:14:29.000 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:29.000 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:14:29.000 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:14:29.000 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:29.000 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:14:29.000 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:29.000 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:29.000 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.000 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:29.000 17:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.913 17:30:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:31.173 00:14:31.173 real 0m29.464s 00:14:31.173 user 2m31.332s 00:14:31.173 sys 0m9.753s 00:14:31.173 17:30:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:31.173 17:30:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.173 ************************************ 00:14:31.173 END TEST nvmf_fio_target 00:14:31.174 ************************************ 00:14:31.174 17:30:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:31.174 17:30:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:31.174 17:30:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:31.174 17:30:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:31.174 ************************************ 00:14:31.174 START TEST nvmf_bdevio 00:14:31.174 ************************************ 00:14:31.174 17:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:31.174 * Looking for test storage... 
00:14:31.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.174 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:31.174 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:14:31.174 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:31.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.434 --rc genhtml_branch_coverage=1 00:14:31.434 --rc genhtml_function_coverage=1 00:14:31.434 --rc genhtml_legend=1 00:14:31.434 --rc geninfo_all_blocks=1 00:14:31.434 --rc geninfo_unexecuted_blocks=1 00:14:31.434 00:14:31.434 ' 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:31.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.434 --rc genhtml_branch_coverage=1 00:14:31.434 --rc genhtml_function_coverage=1 00:14:31.434 --rc genhtml_legend=1 00:14:31.434 --rc geninfo_all_blocks=1 00:14:31.434 --rc geninfo_unexecuted_blocks=1 00:14:31.434 00:14:31.434 ' 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:31.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.434 --rc genhtml_branch_coverage=1 00:14:31.434 --rc genhtml_function_coverage=1 00:14:31.434 --rc genhtml_legend=1 00:14:31.434 --rc geninfo_all_blocks=1 00:14:31.434 --rc geninfo_unexecuted_blocks=1 00:14:31.434 00:14:31.434 ' 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:31.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.434 --rc genhtml_branch_coverage=1 00:14:31.434 --rc genhtml_function_coverage=1 00:14:31.434 --rc genhtml_legend=1 00:14:31.434 --rc geninfo_all_blocks=1 00:14:31.434 --rc geninfo_unexecuted_blocks=1 00:14:31.434 00:14:31.434 ' 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.434 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:31.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:14:31.435 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:39.583 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:39.583 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:39.583 17:30:30 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:39.583 Found net devices under 0000:31:00.0: cvl_0_0 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:39.583 Found net devices under 0000:31:00.1: cvl_0_1 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:39.583 
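[Editor's sketch] The trace above walks nvmf/common.sh's NIC discovery: PCI functions are bucketed by vendor:device ID (both E810 ports here report 0x8086 - 0x159b), their net devices are read from sysfs, and cvl_0_0 / cvl_0_1 are picked as the target- and initiator-side interfaces. The wiring that follows reduces to a small iproute2 sequence; as a standalone recap of the commands traced below (namespace, device names, and addresses copied from this trace; root privileges assumed):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into its own netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                 # sanity check, as done below
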
17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:39.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:39.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:14:39.583 00:14:39.583 --- 10.0.0.2 ping statistics --- 00:14:39.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.583 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:14:39.583 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:39.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:39.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:14:39.584 00:14:39.584 --- 10.0.0.1 ping statistics --- 00:14:39.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.584 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:14:39.584 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:39.584 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:14:39.584 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:39.584 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:39.584 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:39.584 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:39.584 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:39.584 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:39.584 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:39.584 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:39.584 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:39.584 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:39.584 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:39.584 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=239304 00:14:39.584 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 239304 00:14:39.584 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:39.584 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 239304 ']' 00:14:39.584 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.584 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:39.584 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.584 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:39.584 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:39.584 [2024-10-08 17:30:30.926664] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
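[Editor's sketch] With both pings answering, the target binary is launched inside the namespace: -m 0x78 pins the four reactors to cores 3-6 (matching the "Reactor started" lines below), -e 0xFFFF enables all tracepoint groups, and waitforlisten blocks until the RPC socket comes up. A minimal sketch of the same launch-and-wait pattern, with the workspace paths shortened and rpc.py's -t timeout flag used as a crude stand-in for waitforlisten (assumed to behave as in current SPDK):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # poll the default /var/tmp/spdk.sock until the target responds
    ./scripts/rpc.py -t 30 rpc_get_methods > /dev/null
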
00:14:39.584 [2024-10-08 17:30:30.926724] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.584 [2024-10-08 17:30:31.014678] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:39.584 [2024-10-08 17:30:31.104441] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.584 [2024-10-08 17:30:31.104494] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:39.584 [2024-10-08 17:30:31.104503] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.584 [2024-10-08 17:30:31.104510] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.584 [2024-10-08 17:30:31.104516] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:39.584 [2024-10-08 17:30:31.106557] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:14:39.584 [2024-10-08 17:30:31.106718] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:14:39.584 [2024-10-08 17:30:31.106764] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:39.584 [2024-10-08 17:30:31.106765] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:14:39.846 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:39.846 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:14:39.846 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:39.846 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:39.846 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:39.846 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.846 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:39.846 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.846 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:39.846 [2024-10-08 17:30:31.804717] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.846 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.846 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:39.846 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.846 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:39.846 Malloc0 00:14:39.846 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.846 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:40.108 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.108 17:30:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:40.108 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.108 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:40.108 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.108 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:40.108 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.108 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:40.108 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.108 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:40.108 [2024-10-08 17:30:31.869807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:40.108 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.108 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:40.108 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:40.108 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:14:40.108 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:14:40.108 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:14:40.108 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:14:40.108 { 00:14:40.108 "params": { 00:14:40.108 "name": "Nvme$subsystem", 00:14:40.108 "trtype": "$TEST_TRANSPORT", 00:14:40.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:40.108 "adrfam": "ipv4", 00:14:40.108 "trsvcid": "$NVMF_PORT", 00:14:40.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:40.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:40.108 "hdgst": ${hdgst:-false}, 00:14:40.108 "ddgst": ${ddgst:-false} 00:14:40.108 }, 00:14:40.108 "method": "bdev_nvme_attach_controller" 00:14:40.108 } 00:14:40.108 EOF 00:14:40.108 )") 00:14:40.108 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:14:40.108 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:14:40.108 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:14:40.108 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:14:40.108 "params": { 00:14:40.108 "name": "Nvme1", 00:14:40.108 "trtype": "tcp", 00:14:40.108 "traddr": "10.0.0.2", 00:14:40.108 "adrfam": "ipv4", 00:14:40.108 "trsvcid": "4420", 00:14:40.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:40.108 "hdgst": false, 00:14:40.108 "ddgst": false 00:14:40.108 }, 00:14:40.108 "method": "bdev_nvme_attach_controller" 00:14:40.108 }' 00:14:40.108 [2024-10-08 17:30:31.926244] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
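[Editor's sketch] Everything bdevio needs has now been provisioned over JSON-RPC; the sequence traced above reduces to five calls (arguments copied verbatim from this log), after which gen_nvmf_target_json emits the initiator-side bdev_nvme_attach_controller config that bdevio consumes via --json /dev/fd/62:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
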
00:14:40.108 [2024-10-08 17:30:31.926244] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
00:14:40.108 [2024-10-08 17:30:31.926315] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid239572 ]
00:14:40.108 [2024-10-08 17:30:32.009554] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:14:40.369 [2024-10-08 17:30:32.108212] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:14:40.369 [2024-10-08 17:30:32.108443] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:14:40.369 [2024-10-08 17:30:32.108445] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:14:40.631 I/O targets:
00:14:40.631 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:14:40.631
00:14:40.631
00:14:40.631 CUnit - A unit testing framework for C - Version 2.1-3
00:14:40.631 http://cunit.sourceforge.net/
00:14:40.631
00:14:40.631
00:14:40.631 Suite: bdevio tests on: Nvme1n1
00:14:40.631 Test: blockdev write read block ...passed
00:14:40.631 Test: blockdev write zeroes read block ...passed
00:14:40.631 Test: blockdev write zeroes read no split ...passed
00:14:40.631 Test: blockdev write zeroes read split ...passed
00:14:40.631 Test: blockdev write zeroes read split partial ...passed
00:14:40.631 Test: blockdev reset ...[2024-10-08 17:30:32.577962] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:14:40.631 [2024-10-08 17:30:32.578077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf73000 (9): Bad file descriptor
00:14:40.893 [2024-10-08 17:30:32.630497] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:14:40.893 passed
00:14:40.893 Test: blockdev write read 8 blocks ...passed
00:14:40.893 Test: blockdev write read size > 128k ...passed
00:14:40.893 Test: blockdev write read invalid size ...passed
00:14:40.893 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:14:40.893 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:14:40.893 Test: blockdev write read max offset ...passed
00:14:40.893 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:14:40.893 Test: blockdev writev readv 8 blocks ...passed
00:14:40.893 Test: blockdev writev readv 30 x 1block ...passed
00:14:40.893 Test: blockdev writev readv block ...passed
00:14:40.893 Test: blockdev writev readv size > 128k ...passed
00:14:40.893 Test: blockdev writev readv size > 128k in two iovs ...passed
00:14:40.893 Test: blockdev comparev and writev ...[2024-10-08 17:30:32.855672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:40.893 [2024-10-08 17:30:32.855722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:14:40.893 [2024-10-08 17:30:32.855738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:40.893 [2024-10-08 17:30:32.855747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:14:40.893 [2024-10-08 17:30:32.856315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:40.893 [2024-10-08 17:30:32.856329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:14:40.893 [2024-10-08 17:30:32.856343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:40.893 [2024-10-08 17:30:32.856350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:14:40.893 [2024-10-08 17:30:32.856899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:40.893 [2024-10-08 17:30:32.856913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:14:40.893 [2024-10-08 17:30:32.856927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:40.893 [2024-10-08 17:30:32.856935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:14:40.893 [2024-10-08 17:30:32.857522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:40.893 [2024-10-08 17:30:32.857535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:14:40.893 [2024-10-08 17:30:32.857549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:40.893 [2024-10-08 17:30:32.857557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:14:41.155 passed
00:14:41.155 Test: blockdev nvme passthru rw ...passed
00:14:41.155 Test: blockdev nvme passthru vendor specific ...[2024-10-08 17:30:32.941791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:14:41.155 [2024-10-08 17:30:32.941810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:14:41.155 [2024-10-08 17:30:32.942187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:14:41.155 [2024-10-08 17:30:32.942204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:14:41.155 [2024-10-08 17:30:32.942574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:14:41.155 [2024-10-08 17:30:32.942585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:14:41.155 [2024-10-08 17:30:32.942954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:14:41.155 [2024-10-08 17:30:32.942966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:14:41.155 passed
00:14:41.155 Test: blockdev nvme admin passthru ...passed
00:14:41.155 Test: blockdev copy ...passed
00:14:41.155
00:14:41.155 Run Summary: Type Total Ran Passed Failed Inactive
00:14:41.155 suites 1 1 n/a 0 0
00:14:41.155 tests 23 23 23 0 0
00:14:41.155 asserts 152 152 152 0 n/a
00:14:41.155
00:14:41.155 Elapsed time = 1.210 seconds
00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup
00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:41.416 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e
00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
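
For reference, the subsystem lifecycle this test exercises can be driven directly with SPDK's scripts/rpc.py; the harness's rpc_cmd is a thin wrapper around it. A hedged sketch, assuming a running target that already has the Malloc0 bdev and subsystem nqn.2016-06.io.spdk:cnode1 created earlier in this run:

# Set up the namespace and TCP listener, run I/O, then tear down,
# using the same nvmf_* RPCs that appear in the trace above.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
# ... point bdevio (or another initiator) at 10.0.0.2:4420 here ...
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
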
00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 239304 ']' 00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 239304 00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 239304 ']' 00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 239304 00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 239304 00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 239304' 00:14:41.416 killing process with pid 239304 00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 239304 00:14:41.416 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 239304 00:14:41.679 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:41.679 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:41.679 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:41.679 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:14:41.679 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:14:41.679 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:41.679 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:14:41.679 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:41.679 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:41.679 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.679 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.679 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.228 17:30:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:44.229 00:14:44.229 real 0m12.614s 00:14:44.229 user 0m14.199s 00:14:44.229 sys 0m6.445s 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:44.229 ************************************ 00:14:44.229 END TEST nvmf_bdevio 00:14:44.229 ************************************ 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:44.229 00:14:44.229 real 5m6.621s 00:14:44.229 user 11m48.738s 00:14:44.229 sys 1m51.771s 00:14:44.229 
17:30:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:44.229 ************************************ 00:14:44.229 END TEST nvmf_target_core 00:14:44.229 ************************************ 00:14:44.229 17:30:35 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:44.229 17:30:35 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:44.229 17:30:35 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:44.229 17:30:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:44.229 ************************************ 00:14:44.229 START TEST nvmf_target_extra 00:14:44.229 ************************************ 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:44.229 * Looking for test storage... 00:14:44.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:44.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.229 --rc genhtml_branch_coverage=1 00:14:44.229 --rc genhtml_function_coverage=1 00:14:44.229 --rc genhtml_legend=1 00:14:44.229 --rc geninfo_all_blocks=1 00:14:44.229 --rc geninfo_unexecuted_blocks=1 00:14:44.229 00:14:44.229 ' 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:44.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.229 --rc genhtml_branch_coverage=1 00:14:44.229 --rc genhtml_function_coverage=1 00:14:44.229 --rc genhtml_legend=1 00:14:44.229 --rc geninfo_all_blocks=1 00:14:44.229 --rc geninfo_unexecuted_blocks=1 00:14:44.229 00:14:44.229 ' 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:44.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.229 --rc genhtml_branch_coverage=1 00:14:44.229 --rc genhtml_function_coverage=1 00:14:44.229 --rc genhtml_legend=1 00:14:44.229 --rc geninfo_all_blocks=1 00:14:44.229 --rc geninfo_unexecuted_blocks=1 00:14:44.229 00:14:44.229 ' 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:44.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.229 --rc genhtml_branch_coverage=1 00:14:44.229 --rc genhtml_function_coverage=1 00:14:44.229 --rc genhtml_legend=1 00:14:44.229 --rc geninfo_all_blocks=1 00:14:44.229 --rc geninfo_unexecuted_blocks=1 00:14:44.229 00:14:44.229 ' 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
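
The scripts/common.sh walk above (cmp_versions at @364 through @368) is the harness checking whether the installed lcov predates 2.x: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them field by field. A standalone sketch of the same logic follows; cmp_lt is a hypothetical name, not the harness function.

# Field-wise version comparison, mirroring the cmp_versions trace above.
cmp_lt() {
    local -a v1 v2
    local i
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    # Walk the longer field list; missing fields count as 0.
    for (( i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1  # equal is not less-than
}
cmp_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'  # field 0: 1 < 2, so this prints
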
00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:44.229 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.230 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.230 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:44.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:44.230 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:44.230 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:44.230 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:44.230 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:44.230 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:14:44.230 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:14:44.230 17:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:14:44.230 17:30:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:44.230 17:30:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:44.230 17:30:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:44.230 ************************************ 00:14:44.230 START TEST nvmf_example 00:14:44.230 ************************************ 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:14:44.230 * Looking for test storage... 
00:14:44.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:44.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.230 --rc genhtml_branch_coverage=1 00:14:44.230 --rc genhtml_function_coverage=1 00:14:44.230 --rc genhtml_legend=1 00:14:44.230 --rc geninfo_all_blocks=1 00:14:44.230 --rc geninfo_unexecuted_blocks=1 00:14:44.230 00:14:44.230 ' 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:44.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.230 --rc genhtml_branch_coverage=1 00:14:44.230 --rc genhtml_function_coverage=1 00:14:44.230 --rc genhtml_legend=1 00:14:44.230 --rc geninfo_all_blocks=1 00:14:44.230 --rc geninfo_unexecuted_blocks=1 00:14:44.230 00:14:44.230 ' 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:44.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.230 --rc genhtml_branch_coverage=1 00:14:44.230 --rc genhtml_function_coverage=1 00:14:44.230 --rc genhtml_legend=1 00:14:44.230 --rc geninfo_all_blocks=1 00:14:44.230 --rc geninfo_unexecuted_blocks=1 00:14:44.230 00:14:44.230 ' 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:44.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.230 --rc genhtml_branch_coverage=1 00:14:44.230 --rc genhtml_function_coverage=1 00:14:44.230 --rc genhtml_legend=1 00:14:44.230 --rc geninfo_all_blocks=1 00:14:44.230 --rc geninfo_unexecuted_blocks=1 00:14:44.230 00:14:44.230 ' 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:14:44.230 17:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.230 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:44.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:14:44.492 17:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:14:44.492 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:14:52.638 17:30:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:52.638 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:52.638 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:52.638 Found net devices under 0000:31:00.0: cvl_0_0 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:52.638 Found net devices under 0000:31:00.1: cvl_0_1 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:52.638 17:30:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:52.638 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:52.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:52.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms
00:14:52.639
00:14:52.639 --- 10.0.0.2 ping statistics ---
00:14:52.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:52.639 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:52.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:52.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms
00:14:52.639
00:14:52.639 --- 10.0.0.1 ping statistics ---
00:14:52.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:52.639 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=244228
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 244228
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 244228 ']'
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100
00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:52.639 17:30:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:52.901 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:52.901 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:14:52.901 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:14:52.901 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:52.901 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:52.901 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:52.901 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.901 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:52.901 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.901 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:14:52.901 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.901 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:52.901 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.901 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:14:52.901 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:52.901 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.901 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:53.162 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.162 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:14:53.162 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:53.162 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.162 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:53.162 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.162 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:53.162 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.162 17:30:44 
00:14:53.162 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:14:53.162 17:30:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:15:05.402 Initializing NVMe Controllers
00:15:05.402 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:05.402 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:15:05.402 Initialization complete. Launching workers.
00:15:05.402 ========================================================
00:15:05.402                                                                              Latency(us)
00:15:05.402 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:15:05.402 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   18639.12      72.81    3433.29     581.48   15532.02
00:15:05.402 ========================================================
00:15:05.402 Total                                                                  :   18639.12      72.81    3433.29     581.48   15532.02
00:15:05.402
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:15:05.402 rmmod nvme_tcp
00:15:05.402 rmmod nvme_fabrics
00:15:05.402 rmmod nvme_keyring
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 244228 ']'
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 244228
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 244228 ']'
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 244228
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 244228
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']'
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 244228'
00:15:05.402 killing process with pid 244228
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 244228
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 244228
00:15:05.402 nvmf threads initialize successfully
00:15:05.402 bdev subsystem init successfully
00:15:05.402 created a nvmf target service
00:15:05.402 create targets's poll groups done
00:15:05.402 all subsystems of target started
00:15:05.402 nvmf target is running
00:15:05.402 all subsystems of target stopped
00:15:05.402 destroy targets's poll groups done
00:15:05.402 destroyed the nvmf target service
00:15:05.402 bdev subsystem finish successfully
00:15:05.402 nvmf threads destroy successfully
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:15:05.402 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:05.663 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:15:05.663 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:15:05.663 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable
00:15:05.663 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:15:05.663
00:15:05.663 real	0m21.635s
00:15:05.663 user	0m47.085s
00:15:05.663 sys	0m7.047s
00:15:05.663 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:05.663 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:15:05.663 ************************************
00:15:05.663 END TEST nvmf_example
00:15:05.663 ************************************
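The teardown traced above condenses to a few commands. A sketch under the assumption that the nvmf/common.sh helpers behave as this trace shows; the ip netns delete line is inferred from the _remove_spdk_ns call, whose body is not printed here:

    kill 244228 && wait 244228                            # killprocess: stop the example target (pid from this log)
    modprobe -v -r nvme-tcp nvme-fabrics                  # nvmfcleanup: unload initiator modules (the rmmod lines above)
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # iptr: drop SPDK-tagged firewall rules
    ip netns delete cvl_0_0_ns_spdk                       # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                              # clear the remaining test interface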
00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:15:05.926 17:30:57
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:05.926 ************************************ 00:15:05.926 START TEST nvmf_filesystem 00:15:05.926 ************************************ 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:05.926 * Looking for test storage... 00:15:05.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:05.926 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:06.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.194 --rc genhtml_branch_coverage=1 00:15:06.194 --rc genhtml_function_coverage=1 00:15:06.194 --rc genhtml_legend=1 00:15:06.194 --rc geninfo_all_blocks=1 00:15:06.194 --rc geninfo_unexecuted_blocks=1 00:15:06.194 00:15:06.194 ' 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:06.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.194 --rc genhtml_branch_coverage=1 00:15:06.194 --rc genhtml_function_coverage=1 00:15:06.194 --rc genhtml_legend=1 00:15:06.194 --rc geninfo_all_blocks=1 00:15:06.194 --rc geninfo_unexecuted_blocks=1 00:15:06.194 00:15:06.194 ' 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:06.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.194 --rc genhtml_branch_coverage=1 00:15:06.194 --rc genhtml_function_coverage=1 00:15:06.194 --rc genhtml_legend=1 00:15:06.194 --rc geninfo_all_blocks=1 00:15:06.194 --rc geninfo_unexecuted_blocks=1 00:15:06.194 00:15:06.194 ' 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:06.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.194 --rc genhtml_branch_coverage=1 00:15:06.194 --rc genhtml_function_coverage=1 00:15:06.194 --rc genhtml_legend=1 00:15:06.194 --rc geninfo_all_blocks=1 00:15:06.194 --rc geninfo_unexecuted_blocks=1 00:15:06.194 00:15:06.194 ' 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:15:06.194 17:30:57 
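The scripts/common.sh trace above (lt 1.15 2 going through cmp_versions) splits each version string on '.', '-' or ':' and compares field by field, treating missing fields as 0. A condensed sketch of just the less-than path; the real helper takes an operator argument and also covers ge, gt and le:

    lt() {  # lt 1.15 2 -> returns 0, because 1 < 2 in the first field
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # earlier field decides
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }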
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:15:06.194 17:30:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:15:06.194 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # 
CONFIG_RDMA=y 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_TESTS=y 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:06.195 #define SPDK_CONFIG_H 00:15:06.195 #define SPDK_CONFIG_AIO_FSDEV 1 00:15:06.195 #define SPDK_CONFIG_APPS 1 00:15:06.195 #define SPDK_CONFIG_ARCH native 00:15:06.195 #undef SPDK_CONFIG_ASAN 00:15:06.195 #undef SPDK_CONFIG_AVAHI 00:15:06.195 #undef SPDK_CONFIG_CET 00:15:06.195 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:15:06.195 #define SPDK_CONFIG_COVERAGE 1 00:15:06.195 #define SPDK_CONFIG_CROSS_PREFIX 00:15:06.195 #undef SPDK_CONFIG_CRYPTO 00:15:06.195 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:06.195 #undef SPDK_CONFIG_CUSTOMOCF 00:15:06.195 #undef SPDK_CONFIG_DAOS 00:15:06.195 #define SPDK_CONFIG_DAOS_DIR 00:15:06.195 #define SPDK_CONFIG_DEBUG 1 00:15:06.195 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:06.195 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:15:06.195 #define SPDK_CONFIG_DPDK_INC_DIR 00:15:06.195 #define SPDK_CONFIG_DPDK_LIB_DIR 00:15:06.195 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:06.195 #undef SPDK_CONFIG_DPDK_UADK 00:15:06.195 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:15:06.195 #define SPDK_CONFIG_EXAMPLES 1 00:15:06.195 #undef SPDK_CONFIG_FC 00:15:06.195 #define SPDK_CONFIG_FC_PATH 00:15:06.195 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:06.195 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:06.195 #define SPDK_CONFIG_FSDEV 1 00:15:06.195 #undef SPDK_CONFIG_FUSE 00:15:06.195 #undef SPDK_CONFIG_FUZZER 00:15:06.195 #define SPDK_CONFIG_FUZZER_LIB 00:15:06.195 #undef SPDK_CONFIG_GOLANG 00:15:06.195 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:06.195 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:06.195 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:06.195 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:15:06.195 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:06.195 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:06.195 #undef SPDK_CONFIG_HAVE_LZ4 00:15:06.195 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:15:06.195 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:15:06.195 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:15:06.195 #define SPDK_CONFIG_IDXD 1 00:15:06.195 #define SPDK_CONFIG_IDXD_KERNEL 1 00:15:06.195 #undef SPDK_CONFIG_IPSEC_MB 00:15:06.195 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:06.195 #define SPDK_CONFIG_ISAL 1 00:15:06.195 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:06.195 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:15:06.195 #define SPDK_CONFIG_LIBDIR 00:15:06.195 #undef SPDK_CONFIG_LTO 00:15:06.195 #define SPDK_CONFIG_MAX_LCORES 128 00:15:06.195 #define SPDK_CONFIG_NVME_CUSE 1 00:15:06.195 #undef SPDK_CONFIG_OCF 00:15:06.195 #define SPDK_CONFIG_OCF_PATH 00:15:06.195 #define SPDK_CONFIG_OPENSSL_PATH 00:15:06.195 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:06.195 #define SPDK_CONFIG_PGO_DIR 00:15:06.195 #undef SPDK_CONFIG_PGO_USE 00:15:06.195 #define SPDK_CONFIG_PREFIX /usr/local 00:15:06.195 #undef SPDK_CONFIG_RAID5F 00:15:06.195 #undef SPDK_CONFIG_RBD 00:15:06.195 #define SPDK_CONFIG_RDMA 1 00:15:06.195 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:06.195 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:06.195 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:15:06.195 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:06.195 #define SPDK_CONFIG_SHARED 1 00:15:06.195 #undef SPDK_CONFIG_SMA 00:15:06.195 #define SPDK_CONFIG_TESTS 1 00:15:06.195 #undef SPDK_CONFIG_TSAN 00:15:06.195 #define SPDK_CONFIG_UBLK 1 00:15:06.195 #define SPDK_CONFIG_UBSAN 1 00:15:06.195 #undef SPDK_CONFIG_UNIT_TESTS 00:15:06.195 #undef SPDK_CONFIG_URING 00:15:06.195 #define 
SPDK_CONFIG_URING_PATH 00:15:06.195 #undef SPDK_CONFIG_URING_ZNS 00:15:06.195 #undef SPDK_CONFIG_USDT 00:15:06.195 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:06.195 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:06.195 #define SPDK_CONFIG_VFIO_USER 1 00:15:06.195 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:06.195 #define SPDK_CONFIG_VHOST 1 00:15:06.195 #define SPDK_CONFIG_VIRTIO 1 00:15:06.195 #undef SPDK_CONFIG_VTUNE 00:15:06.195 #define SPDK_CONFIG_VTUNE_DIR 00:15:06.195 #define SPDK_CONFIG_WERROR 1 00:15:06.195 #define SPDK_CONFIG_WPDK_DIR 00:15:06.195 #undef SPDK_CONFIG_XNVME 00:15:06.195 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:06.195 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.196 17:30:57 
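The applications.sh@23 test above dumps the generated config header into a [[ ]] substring match to decide whether this is a debug build (it is: SPDK_CONFIG_DEBUG is defined). An equivalent standalone check, using the same file and pattern as the trace:

    config_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
    if [[ $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build, SPDK_AUTOTEST_DEBUG_APPS handling applies"
    fi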
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
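Note the PATH exported above: every re-source of paths/export.sh prepends the same tool directories, so the list accumulates duplicates. That is harmless, but an illustrative dedup one-liner (not part of the harness) would be:

    PATH=$(printf '%s' "$PATH" | awk -v RS=: '!seen[$0]++' | paste -sd: -)   # keep first occurrence of each entry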
pm/common@79 -- # [[ Linux == FreeBSD ]] 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:15:06.196 17:30:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:15:06.196 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:15:06.196 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:15:06.196 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:15:06.196 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- 
# export SPDK_TEST_IOAT 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:15:06.197 
17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:06.197 17:30:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:06.197 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
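The long run of ': N' / 'export SPDK_TEST_*' pairs traced above is a single idiom stamped out once per feature flag: keep any value already set in the environment, fall back to the flag's default, then export it for the child test scripts. A minimal sketch of the repeated pattern (two flags shown, values as they appear in this trace):

    : "${SPDK_TEST_NVMF:=0}";   export SPDK_TEST_NVMF     # traced as ': 1' here, preset for this job
    : "${SPDK_TEST_FUZZER:=0}"; export SPDK_TEST_FUZZER   # traced as ': 0', left at its default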
00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
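# [editor's note] The trace above assembles a LeakSanitizer suppression file before the run.
# A minimal sketch of the same pattern (paths are this job's; the exact heredoc mechanics in
# autotest_common.sh may differ):
#
#   rm -rf /var/tmp/asan_suppression_file
#   echo "leak:libfuse3.so" > /var/tmp/asan_suppression_file
#   export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
#
# Each "leak:<pattern>" line tells LSAN to ignore leak reports whose stack matches the
# pattern, here allocations inside libfuse3 that are not SPDK's to free.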
00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 247111 ]] 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 247111 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:15:06.198 
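# [editor's note] The "[[ -z 247111 ]]" / "kill -0 247111" pair above is the standard bash
# liveness probe: signal 0 delivers nothing but fails if the PID no longer exists (or is not
# ours), so the harness can bail out early when the tracked process is gone. Sketch:
#
#   pid=247111                          # PID recorded by the harness
#   if kill -0 "$pid" 2>/dev/null; then
#       echo "process $pid is alive"
#   else
#       echo "process $pid is gone" >&2
#   fi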
17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.4vZ3P3 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.4vZ3P3/tests/target /tmp/spdk.4vZ3P3 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=156295168 00:15:06.198 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:15:06.198 17:30:58 
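# [editor's note] set_test_storage above walks `df -T` output into parallel associative
# arrays so it can later pick a mount point with enough free space for the test. A condensed
# sketch of that loop, using the same field order the trace shows (source, fs type, size,
# used, available, use%, mount point):
#
#   declare -A mounts fss sizes avails uses
#   while read -r source fs size use avail _ mount; do
#       mounts["$mount"]=$source
#       fss["$mount"]=$fs
#       sizes["$mount"]=$size
#       avails["$mount"]=$avail
#       uses["$mount"]=$use
#   done < <(df -T | grep -v Filesystem)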
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5128134656 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=123668819968 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356529664 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5687709696 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668233728 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678264832 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847898112 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871306752 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23408640 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=175104 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=328704 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:15:06.199 17:30:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64678084608 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678264832 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=180224 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:15:06.199 * Looking for test storage... 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=123668819968 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=7902302208 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:06.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:06.199 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:06.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.462 --rc genhtml_branch_coverage=1 00:15:06.462 --rc genhtml_function_coverage=1 00:15:06.462 --rc genhtml_legend=1 00:15:06.462 --rc geninfo_all_blocks=1 00:15:06.462 --rc geninfo_unexecuted_blocks=1 00:15:06.462 00:15:06.462 ' 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:06.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.462 --rc genhtml_branch_coverage=1 00:15:06.462 --rc genhtml_function_coverage=1 00:15:06.462 --rc genhtml_legend=1 00:15:06.462 --rc geninfo_all_blocks=1 00:15:06.462 --rc geninfo_unexecuted_blocks=1 00:15:06.462 00:15:06.462 ' 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:06.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.462 --rc genhtml_branch_coverage=1 00:15:06.462 --rc genhtml_function_coverage=1 00:15:06.462 --rc genhtml_legend=1 00:15:06.462 --rc geninfo_all_blocks=1 00:15:06.462 --rc geninfo_unexecuted_blocks=1 00:15:06.462 00:15:06.462 ' 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:06.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.462 --rc genhtml_branch_coverage=1 00:15:06.462 --rc genhtml_function_coverage=1 00:15:06.462 --rc genhtml_legend=1 00:15:06.462 --rc geninfo_all_blocks=1 00:15:06.462 --rc geninfo_unexecuted_blocks=1 00:15:06.462 00:15:06.462 ' 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
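# [editor's note] The lt/cmp_versions/decimal dance above is scripts/common.sh comparing the
# installed lcov version ("1.15") against 2, field by field, to decide which LCOV flags are
# safe to pass. A self-contained sketch of the same split-and-compare idea (ver_lt is a
# hypothetical helper name, not the script's, and assumes purely numeric fields):
#
#   ver_lt() {                          # ver_lt 1.15 2  -> returns 0 (1.15 < 2)
#       local IFS=.- v1 v2 i
#       read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
#       for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
#           (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
#           (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
#       done
#       return 1
#   }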
-- nvmf/common.sh@7 -- # uname -s 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.462 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:06.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:06.463 17:30:58 
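# [editor's note] The "common.sh: line 33: [: : integer expression expected" message above is
# a real (benign) scripting bug the trace captured: '[' '' -eq 1 ']' hands an empty string to
# the numeric -eq operator. A hedged fix is to default the variable before comparing; the
# variable below is a stand-in name, since the trace does not show which one was empty:
#
#   flag=""                             # hypothetical stand-in for the unset variable
#   if [ "${flag:-0}" -eq 1 ]; then    # defaulting to 0 keeps -eq happy
#       echo "feature enabled"
#   fi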
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:15:06.463 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:14.608 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:14.608 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:15:14.608 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:14.608 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:14.608 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:14.609 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:14.609 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:14.609 17:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:14.609 Found net devices under 0000:31:00.0: cvl_0_0 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:14.609 Found net devices under 0000:31:00.1: cvl_0_1 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:14.609 17:31:05 
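# [editor's note] The discovery loop above matches supported NIC PCI IDs (e810/x722/mlx)
# against the bus cache and then resolves each PCI address to its kernel net device through
# sysfs. The core resolution step, reduced to a sketch with this job's first port:
#
#   pci=0000:31:00.0                                   # address reported above
#   pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
#   net_dev=${pci_net_devs[0]##*/}                     # strip the sysfs path -> cvl_0_0
#   echo "Found net devices under $pci: $net_dev"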
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:14.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:14.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:15:14.609 00:15:14.609 --- 10.0.0.2 ping statistics --- 00:15:14.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.609 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:14.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:14.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:15:14.609 00:15:14.609 --- 10.0.0.1 ping statistics --- 00:15:14.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.609 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:14.609 ************************************ 00:15:14.609 START TEST nvmf_filesystem_no_in_capsule 00:15:14.609 ************************************ 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=250944 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 250944 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 250944 ']' 00:15:14.609 17:31:05 
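# [editor's note] nvmf_tcp_init above builds a two-endpoint TCP rig on a single box by moving
# the target-side port into its own network namespace; the two pings then verify reachability
# in both directions. The commands, condensed from the trace (interface names are this job's):
#
#   ip netns add cvl_0_0_ns_spdk
#   ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
#   ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default namespace
#   ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
#   ip link set cvl_0_1 up
#   ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
#   iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
#   ping -c 1 10.0.0.2                                  # initiator -> target
#   ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator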
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:14.609 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:14.609 [2024-10-08 17:31:06.037542] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:15:14.609 [2024-10-08 17:31:06.037607] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.609 [2024-10-08 17:31:06.129099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:14.609 [2024-10-08 17:31:06.224718] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.609 [2024-10-08 17:31:06.224774] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:14.609 [2024-10-08 17:31:06.224783] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.609 [2024-10-08 17:31:06.224790] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.609 [2024-10-08 17:31:06.224796] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
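# [editor's note] nvmfappstart above launches the SPDK target inside the namespace and then
# polls until its JSON-RPC socket answers. A stripped-down sketch of the same flow, with
# paths as reported by the trace (the polling RPC is an assumption about waitforlisten's
# internals, not a quote of it):
#
#   ip netns exec cvl_0_0_ns_spdk \
#       /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
#       -i 0 -e 0xFFFF -m 0xF &
#   nvmfpid=$!
#   until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
#         -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do sleep 0.5; done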
00:15:14.609 [2024-10-08 17:31:06.226997] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.609 [2024-10-08 17:31:06.227143] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:14.609 [2024-10-08 17:31:06.227391] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:14.609 [2024-10-08 17:31:06.227395] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.182 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:15.182 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:15:15.182 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:15.182 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:15.182 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:15.182 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.182 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:15.182 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:15.182 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.182 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:15.182 [2024-10-08 17:31:06.912167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:15.182 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.182 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:15.182 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.182 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:15.182 Malloc1 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.182 17:31:07 
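# [editor's note] With the target up, the test provisions it over JSON-RPC: create the TCP
# transport (in-capsule data size 0 for this no_in_capsule variant), back it with a 512 MiB
# malloc bdev in 512-byte blocks, and create the subsystem; the lines that follow add the
# namespace and the 10.0.0.2:4420 listener. Equivalent direct rpc.py calls, arguments as
# traced (the harness issues them through its rpc_cmd wrapper):
#
#   rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
#   $rpc -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192 -c 0
#   $rpc -s /var/tmp/spdk.sock bdev_malloc_create 512 512 -b Malloc1
#   $rpc -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
#        -a -s SPDKISFASTANDAWESOME
#   $rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
#   $rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
#        -t tcp -a 10.0.0.2 -s 4420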
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:15.182 [2024-10-08 17:31:07.059700] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:15.182 { 00:15:15.182 "name": "Malloc1", 00:15:15.182 "aliases": [ 00:15:15.182 "2ac686b4-14d0-4fa9-bec5-d83bde5905c1" 00:15:15.182 ], 00:15:15.182 "product_name": "Malloc disk", 00:15:15.182 "block_size": 512, 00:15:15.182 "num_blocks": 1048576, 00:15:15.182 "uuid": "2ac686b4-14d0-4fa9-bec5-d83bde5905c1", 00:15:15.182 "assigned_rate_limits": { 00:15:15.182 "rw_ios_per_sec": 0, 00:15:15.182 "rw_mbytes_per_sec": 0, 00:15:15.182 "r_mbytes_per_sec": 0, 00:15:15.182 "w_mbytes_per_sec": 0 00:15:15.182 }, 00:15:15.182 "claimed": true, 00:15:15.182 "claim_type": "exclusive_write", 00:15:15.182 "zoned": false, 00:15:15.182 "supported_io_types": { 00:15:15.182 "read": 
true, 00:15:15.182 "write": true, 00:15:15.182 "unmap": true, 00:15:15.182 "flush": true, 00:15:15.182 "reset": true, 00:15:15.182 "nvme_admin": false, 00:15:15.182 "nvme_io": false, 00:15:15.182 "nvme_io_md": false, 00:15:15.182 "write_zeroes": true, 00:15:15.182 "zcopy": true, 00:15:15.182 "get_zone_info": false, 00:15:15.182 "zone_management": false, 00:15:15.182 "zone_append": false, 00:15:15.182 "compare": false, 00:15:15.182 "compare_and_write": false, 00:15:15.182 "abort": true, 00:15:15.182 "seek_hole": false, 00:15:15.182 "seek_data": false, 00:15:15.182 "copy": true, 00:15:15.182 "nvme_iov_md": false 00:15:15.182 }, 00:15:15.182 "memory_domains": [ 00:15:15.182 { 00:15:15.182 "dma_device_id": "system", 00:15:15.182 "dma_device_type": 1 00:15:15.182 }, 00:15:15.182 { 00:15:15.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.182 "dma_device_type": 2 00:15:15.182 } 00:15:15.182 ], 00:15:15.182 "driver_specific": {} 00:15:15.182 } 00:15:15.182 ]' 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:15:15.182 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:15.444 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:15:15.444 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:15:15.444 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:15:15.444 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:15.444 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:16.830 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:16.830 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:15:16.830 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:16.830 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:16.830 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:15:18.742 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:18.742 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:18.742 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:15:18.742 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:18.742 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:18.742 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:15:18.742 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:18.742 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:18.742 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:18.742 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:18.742 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:18.742 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:18.742 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:18.742 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:18.742 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:18.742 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:18.742 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:19.002 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:15:19.944 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:15:20.884 17:31:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:15:20.884 17:31:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:20.884 17:31:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:20.884 17:31:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:20.884 17:31:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:20.884 ************************************ 00:15:20.884 START TEST filesystem_ext4 00:15:20.884 ************************************ 00:15:20.884 17:31:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
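The trace above covers the whole host-side setup for the no-in-capsule pass: resolve the block device backing serial SPDKISFASTANDAWESOME, cross-check its size against the 512 MiB malloc bdev, and carve a single GPT partition for the filesystem tests. A condensed sketch of that sequence, assuming the suite's rpc_cmd wrapper and a target already listening on 10.0.0.2:4420 (the sysfs size arithmetic is a reading of setup/common.sh's sec_size_to_bytes, not verbatim from the trace):

# Target side, as driven over rpc.py in the trace:
rpc_cmd bdev_malloc_create 512 512 -b Malloc1                   # 1048576 blocks x 512 B = 536870912 B
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side (filesystem.sh@60-69 as traced; hostnqn/hostid flags elided):
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
nvme_size=$(( $(cat /sys/block/$nvme_name/size) * 512 ))        # sector count x 512 B
(( nvme_size == malloc_size ))                                  # both sides are 536870912 here
mkdir -p /mnt/device
parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe                                                       # let the kernel re-read the table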
00:15:20.884 17:31:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:20.884 17:31:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:20.884 17:31:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:20.884 17:31:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:15:20.884 17:31:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:15:20.884 17:31:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:15:20.884 17:31:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:15:20.884 17:31:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:15:20.884 17:31:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:15:20.884 17:31:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:20.884 mke2fs 1.47.0 (5-Feb-2023) 00:15:20.884 Discarding device blocks: 0/522240 done 00:15:20.884 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:20.884 Filesystem UUID: 6c55be99-026f-4f5c-8509-7b37eb4e5b15 00:15:20.884 Superblock backups stored on blocks: 00:15:20.884 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:20.884 00:15:20.884 Allocating group tables: 0/64 done 00:15:20.884 Writing inode tables: 0/64 done 00:15:21.145 Creating journal (8192 blocks): done 00:15:23.360 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:15:23.360 00:15:23.360 17:31:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:15:23.360 17:31:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:28.646 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:28.646 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:15:28.646 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:28.646 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:15:28.646 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:15:28.646 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:28.908 
17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 250944 00:15:28.908 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:28.908 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:28.908 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:28.908 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:28.908 00:15:28.908 real 0m8.006s 00:15:28.908 user 0m0.026s 00:15:28.908 sys 0m0.130s 00:15:28.908 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:28.908 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:15:28.908 ************************************ 00:15:28.908 END TEST filesystem_ext4 00:15:28.908 ************************************ 00:15:28.908 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:28.908 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:28.908 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:28.908 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:28.908 ************************************ 00:15:28.908 START TEST filesystem_btrfs 00:15:28.908 ************************************ 00:15:28.908 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:28.908 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:28.908 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:28.908 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:28.908 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:15:28.908 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:15:28.908 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:15:28.908 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:15:28.908 17:31:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:15:28.908 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:15:28.908 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:29.170 btrfs-progs v6.8.1 00:15:29.170 See https://btrfs.readthedocs.io for more information. 00:15:29.170 00:15:29.170 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:15:29.170 NOTE: several default settings have changed in version 5.15, please make sure 00:15:29.170 this does not affect your deployments: 00:15:29.170 - DUP for metadata (-m dup) 00:15:29.170 - enabled no-holes (-O no-holes) 00:15:29.170 - enabled free-space-tree (-R free-space-tree) 00:15:29.170 00:15:29.170 Label: (null) 00:15:29.170 UUID: 91e4c033-6651-4650-ad68-4785d74f462e 00:15:29.170 Node size: 16384 00:15:29.170 Sector size: 4096 (CPU page size: 4096) 00:15:29.170 Filesystem size: 510.00MiB 00:15:29.170 Block group profiles: 00:15:29.170 Data: single 8.00MiB 00:15:29.170 Metadata: DUP 32.00MiB 00:15:29.170 System: DUP 8.00MiB 00:15:29.170 SSD detected: yes 00:15:29.170 Zoned device: no 00:15:29.170 Features: extref, skinny-metadata, no-holes, free-space-tree 00:15:29.170 Checksum: crc32c 00:15:29.170 Number of devices: 1 00:15:29.170 Devices: 00:15:29.170 ID SIZE PATH 00:15:29.170 1 510.00MiB /dev/nvme0n1p1 00:15:29.170 00:15:29.170 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:15:29.170 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:29.742 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 250944 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:29.743 
17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:29.743 00:15:29.743 real 0m0.778s 00:15:29.743 user 0m0.027s 00:15:29.743 sys 0m0.174s 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:15:29.743 ************************************ 00:15:29.743 END TEST filesystem_btrfs 00:15:29.743 ************************************ 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:29.743 ************************************ 00:15:29.743 START TEST filesystem_xfs 00:15:29.743 ************************************ 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:15:29.743 17:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:29.743 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:29.743 = sectsz=512 attr=2, projid32bit=1 00:15:29.743 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:29.743 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:29.743 data 
= bsize=4096 blocks=130560, imaxpct=25 00:15:29.743 = sunit=0 swidth=0 blks 00:15:29.743 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:29.743 log =internal log bsize=4096 blocks=16384, version=2 00:15:29.743 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:29.743 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:31.130 Discarding blocks...Done. 00:15:31.130 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:15:31.130 17:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:34.429 17:31:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:34.691 17:31:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:15:34.691 17:31:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:34.691 17:31:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:15:34.691 17:31:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:15:34.691 17:31:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:34.691 17:31:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 250944 00:15:34.691 17:31:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:34.691 17:31:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:34.691 17:31:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:34.691 17:31:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:34.691 00:15:34.691 real 0m4.899s 00:15:34.691 user 0m0.034s 00:15:34.691 sys 0m0.124s 00:15:34.691 17:31:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:34.691 17:31:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:15:34.691 ************************************ 00:15:34.691 END TEST filesystem_xfs 00:15:34.691 ************************************ 00:15:34.691 17:31:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:34.691 17:31:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:15:35.265 17:31:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:35.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.265 17:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:35.265 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:15:35.265 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:35.265 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:35.265 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:35.265 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:35.265 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:15:35.265 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:35.265 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.265 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:35.265 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.265 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:35.265 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 250944 00:15:35.265 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 250944 ']' 00:15:35.265 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 250944 00:15:35.265 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:15:35.265 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:35.265 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 250944 00:15:35.265 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:35.265 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:35.265 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 250944' 00:15:35.265 killing process with pid 250944 00:15:35.265 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 250944 00:15:35.265 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 250944 00:15:35.527 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:15:35.527 00:15:35.527 real 0m21.419s 00:15:35.527 user 1m24.488s 00:15:35.527 sys 0m1.678s 00:15:35.527 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:35.527 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:35.527 ************************************ 00:15:35.527 END TEST nvmf_filesystem_no_in_capsule 00:15:35.527 ************************************ 00:15:35.527 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:15:35.527 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:35.527 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:35.527 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:35.527 ************************************ 00:15:35.527 START TEST nvmf_filesystem_in_capsule 00:15:35.527 ************************************ 00:15:35.527 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:15:35.527 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:15:35.527 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:35.527 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:35.527 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:35.527 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:35.527 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=255536 00:15:35.527 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 255536 00:15:35.527 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:35.527 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 255536 ']' 00:15:35.527 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.527 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:35.527 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
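Teardown for the first pass, condensed from the traced filesystem.sh@91-101 and the killprocess helper (pid 250944 is this run's nvmf_tgt; killprocess's uname and reactor_0 sanity checks are elided):

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1      # drop the test partition under an flock
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 250944 && wait 250944                          # killprocess: signal the target, then reap it

The in_capsule pass that starts here re-runs the same flow with one difference visible below: nvmf_create_transport gets -c 4096, so hosts may carry up to 4 KiB of write data inside the command capsule itself instead of waiting for a ready-to-transfer exchange.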
00:15:35.527 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:35.527 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:35.787 [2024-10-08 17:31:27.530063] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:15:35.787 [2024-10-08 17:31:27.530120] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.787 [2024-10-08 17:31:27.615253] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:35.787 [2024-10-08 17:31:27.676459] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.787 [2024-10-08 17:31:27.676493] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.787 [2024-10-08 17:31:27.676499] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.787 [2024-10-08 17:31:27.676504] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.787 [2024-10-08 17:31:27.676508] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:35.787 [2024-10-08 17:31:27.677853] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.787 [2024-10-08 17:31:27.678020] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.787 [2024-10-08 17:31:27.678092] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.787 [2024-10-08 17:31:27.678094] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:36.360 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:36.360 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:15:36.360 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:36.360 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:36.360 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:36.620 [2024-10-08 17:31:28.381448] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.620 17:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:36.620 Malloc1 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:36.620 [2024-10-08 17:31:28.507562] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:15:36.620 17:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.620 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:36.620 { 00:15:36.620 "name": "Malloc1", 00:15:36.620 "aliases": [ 00:15:36.620 "ba6e0eec-c1d6-4860-ab91-7ab3007b6542" 00:15:36.620 ], 00:15:36.620 "product_name": "Malloc disk", 00:15:36.620 "block_size": 512, 00:15:36.620 "num_blocks": 1048576, 00:15:36.620 "uuid": "ba6e0eec-c1d6-4860-ab91-7ab3007b6542", 00:15:36.620 "assigned_rate_limits": { 00:15:36.620 "rw_ios_per_sec": 0, 00:15:36.620 "rw_mbytes_per_sec": 0, 00:15:36.620 "r_mbytes_per_sec": 0, 00:15:36.620 "w_mbytes_per_sec": 0 00:15:36.620 }, 00:15:36.620 "claimed": true, 00:15:36.620 "claim_type": "exclusive_write", 00:15:36.620 "zoned": false, 00:15:36.620 "supported_io_types": { 00:15:36.620 "read": true, 00:15:36.620 "write": true, 00:15:36.620 "unmap": true, 00:15:36.620 "flush": true, 00:15:36.620 "reset": true, 00:15:36.620 "nvme_admin": false, 00:15:36.620 "nvme_io": false, 00:15:36.620 "nvme_io_md": false, 00:15:36.620 "write_zeroes": true, 00:15:36.620 "zcopy": true, 00:15:36.620 "get_zone_info": false, 00:15:36.620 "zone_management": false, 00:15:36.620 "zone_append": false, 00:15:36.620 "compare": false, 00:15:36.620 "compare_and_write": false, 00:15:36.621 "abort": true, 00:15:36.621 "seek_hole": false, 00:15:36.621 "seek_data": false, 00:15:36.621 "copy": true, 00:15:36.621 "nvme_iov_md": false 00:15:36.621 }, 00:15:36.621 "memory_domains": [ 00:15:36.621 { 00:15:36.621 "dma_device_id": "system", 00:15:36.621 "dma_device_type": 1 00:15:36.621 }, 00:15:36.621 { 00:15:36.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.621 "dma_device_type": 2 00:15:36.621 } 00:15:36.621 ], 00:15:36.621 "driver_specific": {} 00:15:36.621 } 00:15:36.621 ]' 00:15:36.621 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:36.621 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:15:36.621 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:36.881 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:15:36.881 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:15:36.881 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:15:36.881 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:36.881 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:38.264 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:38.264 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:15:38.264 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:38.264 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:38.264 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:15:40.811 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:40.811 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:40.811 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:40.811 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:40.811 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:40.811 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:15:40.811 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:40.811 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:40.811 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:40.811 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:40.811 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:40.811 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:40.811 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:40.811 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:40.811 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:40.811 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:40.811 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:40.811 17:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:15:40.811 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:15:42.198 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:15:42.198 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:42.198 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:42.198 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:42.198 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:42.198 ************************************ 00:15:42.198 START TEST filesystem_in_capsule_ext4 00:15:42.198 ************************************ 00:15:42.198 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:15:42.198 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:42.198 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:42.198 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:42.198 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:15:42.198 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:15:42.198 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:15:42.198 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:15:42.198 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:15:42.198 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:15:42.198 17:31:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:42.198 mke2fs 1.47.0 (5-Feb-2023) 00:15:42.198 Discarding device blocks: 0/522240 done 00:15:42.198 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:42.198 Filesystem UUID: d3cece68-f6fa-473b-923c-8f1899019b64 00:15:42.198 Superblock backups stored on blocks: 00:15:42.198 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:42.198 00:15:42.198 Allocating group tables: 0/64 done 00:15:42.198 Writing inode tables: 
0/64 done 00:15:42.198 Creating journal (8192 blocks): done 00:15:43.343 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:15:43.343 00:15:43.343 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:15:43.343 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:49.926 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:49.926 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:15:49.926 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:49.926 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:15:49.926 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:15:49.926 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:49.926 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 255536 00:15:49.926 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:49.926 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:49.926 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:49.926 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:49.926 00:15:49.926 real 0m7.612s 00:15:49.926 user 0m0.020s 00:15:49.926 sys 0m0.086s 00:15:49.926 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:49.926 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:15:49.926 ************************************ 00:15:49.927 END TEST filesystem_in_capsule_ext4 00:15:49.927 ************************************ 00:15:49.927 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:49.927 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:49.927 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:49.927 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:49.927 
************************************ 00:15:49.927 START TEST filesystem_in_capsule_btrfs 00:15:49.927 ************************************ 00:15:49.927 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:49.927 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:49.927 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:49.927 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:49.927 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:15:49.927 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:15:49.927 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:15:49.927 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:15:49.927 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:15:49.927 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:15:49.927 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:49.927 btrfs-progs v6.8.1 00:15:49.927 See https://btrfs.readthedocs.io for more information. 00:15:49.927 00:15:49.927 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
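For btrfs and xfs the helper flips the force flag, since only mkfs.ext4 spells it -F. A condensed reading of make_filesystem as traced at common/autotest_common.sh@926-937, here and in the earlier pass (the real helper also keeps a retry counter i, elided in this sketch):

make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi   # ext4 wants -F, btrfs/xfs want -f
    mkfs."$fstype" $force "$dev_name"
}
make_filesystem btrfs /dev/nvme0n1p1    # what produced the btrfs-progs v6.8.1 output around this point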
00:15:49.927 NOTE: several default settings have changed in version 5.15, please make sure
00:15:49.927 this does not affect your deployments:
00:15:49.927 - DUP for metadata (-m dup)
00:15:49.927 - enabled no-holes (-O no-holes)
00:15:49.927 - enabled free-space-tree (-R free-space-tree)
00:15:49.927
00:15:49.927 Label: (null)
00:15:49.927 UUID: 42d4ffe6-a95f-4edd-b301-00f1e2380df4
00:15:49.927 Node size: 16384
00:15:49.927 Sector size: 4096 (CPU page size: 4096)
00:15:49.927 Filesystem size: 510.00MiB
00:15:49.927 Block group profiles:
00:15:49.927 Data: single 8.00MiB
00:15:49.927 Metadata: DUP 32.00MiB
00:15:49.927 System: DUP 8.00MiB
00:15:49.927 SSD detected: yes
00:15:49.927 Zoned device: no
00:15:49.927 Features: extref, skinny-metadata, no-holes, free-space-tree
00:15:49.927 Checksum: crc32c
00:15:49.927 Number of devices: 1
00:15:49.927 Devices:
00:15:49.927 ID SIZE PATH
00:15:49.927 1 510.00MiB /dev/nvme0n1p1
00:15:49.927
00:15:49.927 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0
00:15:49.927 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:15:50.868 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:15:50.868 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:15:50.868 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:15:50.868 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:15:50.868 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0
00:15:50.868 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:15:50.868 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 255536
00:15:50.868 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:15:50.868 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:15:50.868 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:15:50.868 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:15:50.868
00:15:50.868 real 0m1.267s
00:15:50.868 user 0m0.032s
00:15:50.868 sys 0m0.117s
00:15:50.868 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:50.868 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x
00:15:50.868 ************************************
00:15:50.868 END TEST filesystem_in_capsule_btrfs
00:15:50.869 ************************************
00:15:50.869 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:15:50.869 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:15:50.869 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:50.869 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:50.869 ************************************
00:15:50.869 START TEST filesystem_in_capsule_xfs
00:15:50.869 ************************************
00:15:50.869 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1
00:15:50.869 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:15:50.869 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:15:50.869 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:15:50.869 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs
00:15:50.869 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1
00:15:50.869 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0
00:15:50.869 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force
00:15:50.869 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']'
00:15:50.869 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f
00:15:50.869 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1
00:15:51.129 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:15:51.129 = sectsz=512 attr=2, projid32bit=1
00:15:51.129 = crc=1 finobt=1, sparse=1, rmapbt=0
00:15:51.129 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:15:51.129 data = bsize=4096 blocks=130560, imaxpct=25
00:15:51.129 = sunit=0 swidth=0 blks
00:15:51.129 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:15:51.129 log =internal log bsize=4096 blocks=16384, version=2
00:15:51.129 = sectsz=512 sunit=0 blks, lazy-count=1
00:15:51.129 realtime =none extsz=4096 blocks=0, rtextents=0
00:15:51.701 Discarding blocks...Done.
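Both filesystem runs above exercise the same pattern from target/filesystem.sh: make_filesystem formats the partition carved out of the exported namespace, then the test mounts it, writes and removes a file, syncs, unmounts, and checks via lsblk that the device is still present. A minimal standalone sketch of that sequence, assuming the device and mountpoint paths seen in the trace (an illustration only, not the SPDK script itself):

    # Assumes /dev/nvme0n1p1 is a partition on the NVMe-oF-attached namespace
    # and /mnt/device exists; swap mkfs.xfs for mkfs.btrfs to match the btrfs case.
    dev=/dev/nvme0n1p1
    mnt=/mnt/device
    mkfs.xfs -f "$dev"
    mount "$dev" "$mnt"
    touch "$mnt/aaa"      # prove writes reach the remote namespace
    sync
    rm "$mnt/aaa"
    sync
    umount "$mnt"
    lsblk -l -o NAME | grep -q -w "$(basename "$dev")"   # device node still present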
00:15:51.701 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:15:51.701 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:54.246 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:54.246 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:15:54.246 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:54.246 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:15:54.246 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:15:54.246 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:54.246 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 255536 00:15:54.246 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:54.246 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:54.507 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:54.507 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:54.507 00:15:54.507 real 0m3.425s 00:15:54.507 user 0m0.036s 00:15:54.507 sys 0m0.071s 00:15:54.507 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:54.507 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:15:54.507 ************************************ 00:15:54.507 END TEST filesystem_in_capsule_xfs 00:15:54.507 ************************************ 00:15:54.507 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:54.507 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:15:54.507 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:54.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.768 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:54.768 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:15:54.768 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:54.768 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:54.768 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:54.768 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:54.768 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:15:54.768 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:54.768 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.768 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:54.768 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.768 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:54.768 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 255536 00:15:54.768 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 255536 ']' 00:15:54.768 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 255536 00:15:54.768 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:15:54.768 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:54.768 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 255536 00:15:54.769 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:54.769 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:54.769 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 255536' 00:15:54.769 killing process with pid 255536 00:15:54.769 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 255536 00:15:54.769 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 255536 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:15:55.030 00:15:55.030 real 0m19.424s 00:15:55.030 user 1m16.751s 00:15:55.030 sys 0m1.424s 00:15:55.030 17:31:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:55.030 ************************************ 00:15:55.030 END TEST nvmf_filesystem_in_capsule 00:15:55.030 ************************************ 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:55.030 rmmod nvme_tcp 00:15:55.030 rmmod nvme_fabrics 00:15:55.030 rmmod nvme_keyring 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.030 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.577 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:57.577 00:15:57.577 real 0m51.347s 00:15:57.577 user 2m43.733s 00:15:57.577 sys 0m9.059s 00:15:57.577 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:57.577 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:57.577 
************************************ 00:15:57.577 END TEST nvmf_filesystem 00:15:57.577 ************************************ 00:15:57.577 17:31:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:57.577 17:31:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:57.577 17:31:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:57.577 17:31:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:57.577 ************************************ 00:15:57.577 START TEST nvmf_target_discovery 00:15:57.577 ************************************ 00:15:57.577 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:57.577 * Looking for test storage... 00:15:57.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:57.577 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:57.577 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:57.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.578 --rc genhtml_branch_coverage=1 00:15:57.578 --rc genhtml_function_coverage=1 00:15:57.578 --rc genhtml_legend=1 00:15:57.578 --rc geninfo_all_blocks=1 00:15:57.578 --rc geninfo_unexecuted_blocks=1 00:15:57.578 00:15:57.578 ' 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:57.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.578 --rc genhtml_branch_coverage=1 00:15:57.578 --rc genhtml_function_coverage=1 00:15:57.578 --rc genhtml_legend=1 00:15:57.578 --rc geninfo_all_blocks=1 00:15:57.578 --rc geninfo_unexecuted_blocks=1 00:15:57.578 00:15:57.578 ' 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:57.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.578 --rc genhtml_branch_coverage=1 00:15:57.578 --rc genhtml_function_coverage=1 00:15:57.578 --rc genhtml_legend=1 00:15:57.578 --rc geninfo_all_blocks=1 00:15:57.578 --rc geninfo_unexecuted_blocks=1 00:15:57.578 00:15:57.578 ' 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:57.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.578 --rc genhtml_branch_coverage=1 00:15:57.578 --rc genhtml_function_coverage=1 00:15:57.578 --rc genhtml_legend=1 00:15:57.578 --rc geninfo_all_blocks=1 00:15:57.578 --rc geninfo_unexecuted_blocks=1 00:15:57.578 00:15:57.578 ' 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:57.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:15:57.578 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:57.579 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.579 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:57.579 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:57.579 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:57.579 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.579 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.579 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.579 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:57.579 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:57.579 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:15:57.579 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:16:05.725 17:31:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:05.725 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:05.725 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:05.725 Found net devices under 0000:31:00.0: cvl_0_0 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:05.725 Found net devices under 0000:31:00.1: cvl_0_1 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:16:05.725 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:05.726 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:05.726 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:05.726 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:05.726 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:05.726 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:05.726 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:05.726 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:05.726 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:05.726 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:05.726 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:05.726 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:05.726 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:05.726 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:05.726 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:05.726 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:05.726 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:05.726 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:05.726 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:05.726 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:05.726 17:31:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:16:05.726 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:16:05.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:05.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms
00:16:05.726
00:16:05.726 --- 10.0.0.2 ping statistics ---
00:16:05.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:05.726 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:05.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:05.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms
00:16:05.726
00:16:05.726 --- 10.0.0.1 ping statistics ---
00:16:05.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:05.726 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=263850
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 263850
00:16:05.726 17:31:57
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 263850 ']'
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:05.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:05.726 17:31:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:05.726 [2024-10-08 17:31:57.181790] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
00:16:05.726 [2024-10-08 17:31:57.181858] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:05.726 [2024-10-08 17:31:57.272175] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:05.726 [2024-10-08 17:31:57.366766] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:05.726 [2024-10-08 17:31:57.366827] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:05.726 [2024-10-08 17:31:57.366837] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:05.726 [2024-10-08 17:31:57.366843] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:05.726 [2024-10-08 17:31:57.366850] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
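The rpc_cmd traces that follow drive this freshly started target over its default RPC socket (/var/tmp/spdk.sock). A hedged sketch of the same bring-up and provisioning done directly with rpc.py; $SPDK is an assumed placeholder for the repo checkout, while the namespace, flags, NQNs, and serial numbers mirror the trace:

    # Start the target inside the test netns, wait for its RPC socket, then provision it.
    SPDK=/path/to/spdk   # assumed checkout location
    rpc="$SPDK/scripts/rpc.py"
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done   # simple stand-in for waitforlisten
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4; do
        "$rpc" bdev_null_create "Null$i" 102400 512                      # 100 GiB null bdev, 512 B blocks
        "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done
    "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430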
00:16:05.726 [2024-10-08 17:31:57.368997] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.726 [2024-10-08 17:31:57.369137] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.726 [2024-10-08 17:31:57.369376] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:16:05.726 [2024-10-08 17:31:57.369380] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.300 [2024-10-08 17:31:58.066236] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.300 Null1 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.300 17:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.300 [2024-10-08 17:31:58.138734] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.300 Null2 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:16:06.300 Null3 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.300 Null4 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.300 17:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.300 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.561 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.561 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:16:06.561 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.561 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.561 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.561 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:16:06.561 00:16:06.561 Discovery Log Number of Records 6, Generation counter 6 00:16:06.561 =====Discovery Log Entry 0====== 00:16:06.561 trtype: tcp 00:16:06.561 adrfam: ipv4 00:16:06.561 subtype: current discovery subsystem 00:16:06.561 treq: not required 00:16:06.561 portid: 0 00:16:06.561 trsvcid: 4420 00:16:06.561 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:06.561 traddr: 10.0.0.2 00:16:06.561 eflags: explicit discovery connections, duplicate discovery information 00:16:06.561 sectype: none 00:16:06.561 =====Discovery Log Entry 1====== 00:16:06.561 trtype: tcp 00:16:06.561 adrfam: ipv4 00:16:06.561 subtype: nvme subsystem 00:16:06.561 treq: not required 00:16:06.562 portid: 0 00:16:06.562 trsvcid: 4420 00:16:06.562 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:06.562 traddr: 10.0.0.2 00:16:06.562 eflags: none 00:16:06.562 sectype: none 00:16:06.562 =====Discovery Log Entry 2====== 00:16:06.562 trtype: tcp 00:16:06.562 adrfam: ipv4 00:16:06.562 subtype: nvme subsystem 00:16:06.562 treq: not required 00:16:06.562 portid: 0 00:16:06.562 trsvcid: 4420 00:16:06.562 subnqn: nqn.2016-06.io.spdk:cnode2 00:16:06.562 traddr: 10.0.0.2 00:16:06.562 eflags: none 00:16:06.562 sectype: none 00:16:06.562 =====Discovery Log Entry 3====== 00:16:06.562 trtype: tcp 00:16:06.562 adrfam: ipv4 00:16:06.562 subtype: nvme subsystem 00:16:06.562 treq: not required 00:16:06.562 portid: 0 00:16:06.562 trsvcid: 4420 00:16:06.562 subnqn: nqn.2016-06.io.spdk:cnode3 00:16:06.562 traddr: 10.0.0.2 00:16:06.562 eflags: none 00:16:06.562 sectype: none 00:16:06.562 =====Discovery Log Entry 4====== 00:16:06.562 trtype: tcp 00:16:06.562 adrfam: ipv4 00:16:06.562 subtype: nvme subsystem 
00:16:06.562 treq: not required 00:16:06.562 portid: 0 00:16:06.562 trsvcid: 4420 00:16:06.562 subnqn: nqn.2016-06.io.spdk:cnode4 00:16:06.562 traddr: 10.0.0.2 00:16:06.562 eflags: none 00:16:06.562 sectype: none 00:16:06.562 =====Discovery Log Entry 5====== 00:16:06.562 trtype: tcp 00:16:06.562 adrfam: ipv4 00:16:06.562 subtype: discovery subsystem referral 00:16:06.562 treq: not required 00:16:06.562 portid: 0 00:16:06.562 trsvcid: 4430 00:16:06.562 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:06.562 traddr: 10.0.0.2 00:16:06.562 eflags: none 00:16:06.562 sectype: none 00:16:06.562 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:16:06.562 Perform nvmf subsystem discovery via RPC 00:16:06.562 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:16:06.562 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.562 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.562 [ 00:16:06.562 { 00:16:06.562 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:06.562 "subtype": "Discovery", 00:16:06.562 "listen_addresses": [ 00:16:06.562 { 00:16:06.562 "trtype": "TCP", 00:16:06.562 "adrfam": "IPv4", 00:16:06.562 "traddr": "10.0.0.2", 00:16:06.562 "trsvcid": "4420" 00:16:06.562 } 00:16:06.562 ], 00:16:06.562 "allow_any_host": true, 00:16:06.562 "hosts": [] 00:16:06.562 }, 00:16:06.562 { 00:16:06.562 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:06.562 "subtype": "NVMe", 00:16:06.562 "listen_addresses": [ 00:16:06.562 { 00:16:06.562 "trtype": "TCP", 00:16:06.562 "adrfam": "IPv4", 00:16:06.562 "traddr": "10.0.0.2", 00:16:06.562 "trsvcid": "4420" 00:16:06.562 } 00:16:06.562 ], 00:16:06.562 "allow_any_host": true, 00:16:06.562 "hosts": [], 00:16:06.562 "serial_number": "SPDK00000000000001", 00:16:06.562 "model_number": "SPDK bdev Controller", 00:16:06.562 "max_namespaces": 32, 00:16:06.562 "min_cntlid": 1, 00:16:06.562 "max_cntlid": 65519, 00:16:06.562 "namespaces": [ 00:16:06.562 { 00:16:06.562 "nsid": 1, 00:16:06.562 "bdev_name": "Null1", 00:16:06.562 "name": "Null1", 00:16:06.562 "nguid": "06ECC37F525741D29137F9D50BDAB027", 00:16:06.562 "uuid": "06ecc37f-5257-41d2-9137-f9d50bdab027" 00:16:06.562 } 00:16:06.562 ] 00:16:06.562 }, 00:16:06.562 { 00:16:06.562 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:06.562 "subtype": "NVMe", 00:16:06.562 "listen_addresses": [ 00:16:06.562 { 00:16:06.562 "trtype": "TCP", 00:16:06.562 "adrfam": "IPv4", 00:16:06.562 "traddr": "10.0.0.2", 00:16:06.562 "trsvcid": "4420" 00:16:06.562 } 00:16:06.562 ], 00:16:06.562 "allow_any_host": true, 00:16:06.562 "hosts": [], 00:16:06.562 "serial_number": "SPDK00000000000002", 00:16:06.562 "model_number": "SPDK bdev Controller", 00:16:06.562 "max_namespaces": 32, 00:16:06.562 "min_cntlid": 1, 00:16:06.562 "max_cntlid": 65519, 00:16:06.562 "namespaces": [ 00:16:06.562 { 00:16:06.562 "nsid": 1, 00:16:06.562 "bdev_name": "Null2", 00:16:06.562 "name": "Null2", 00:16:06.562 "nguid": "9FF33BE090CC40928FA0360BFCF4529B", 00:16:06.562 "uuid": "9ff33be0-90cc-4092-8fa0-360bfcf4529b" 00:16:06.562 } 00:16:06.562 ] 00:16:06.562 }, 00:16:06.562 { 00:16:06.562 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:16:06.562 "subtype": "NVMe", 00:16:06.562 "listen_addresses": [ 00:16:06.562 { 00:16:06.562 "trtype": "TCP", 00:16:06.562 "adrfam": "IPv4", 00:16:06.562 "traddr": "10.0.0.2", 
00:16:06.562 "trsvcid": "4420" 00:16:06.562 } 00:16:06.562 ], 00:16:06.562 "allow_any_host": true, 00:16:06.562 "hosts": [], 00:16:06.562 "serial_number": "SPDK00000000000003", 00:16:06.562 "model_number": "SPDK bdev Controller", 00:16:06.562 "max_namespaces": 32, 00:16:06.562 "min_cntlid": 1, 00:16:06.562 "max_cntlid": 65519, 00:16:06.562 "namespaces": [ 00:16:06.562 { 00:16:06.562 "nsid": 1, 00:16:06.562 "bdev_name": "Null3", 00:16:06.562 "name": "Null3", 00:16:06.562 "nguid": "FD37AE1628C34AD7A84194D070ABD79C", 00:16:06.824 "uuid": "fd37ae16-28c3-4ad7-a841-94d070abd79c" 00:16:06.824 } 00:16:06.824 ] 00:16:06.824 }, 00:16:06.824 { 00:16:06.824 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:16:06.824 "subtype": "NVMe", 00:16:06.824 "listen_addresses": [ 00:16:06.824 { 00:16:06.824 "trtype": "TCP", 00:16:06.824 "adrfam": "IPv4", 00:16:06.824 "traddr": "10.0.0.2", 00:16:06.824 "trsvcid": "4420" 00:16:06.824 } 00:16:06.824 ], 00:16:06.824 "allow_any_host": true, 00:16:06.824 "hosts": [], 00:16:06.824 "serial_number": "SPDK00000000000004", 00:16:06.824 "model_number": "SPDK bdev Controller", 00:16:06.824 "max_namespaces": 32, 00:16:06.824 "min_cntlid": 1, 00:16:06.824 "max_cntlid": 65519, 00:16:06.824 "namespaces": [ 00:16:06.824 { 00:16:06.824 "nsid": 1, 00:16:06.824 "bdev_name": "Null4", 00:16:06.824 "name": "Null4", 00:16:06.824 "nguid": "C74BFEE84DDA43DBBF14DB78B6C63749", 00:16:06.824 "uuid": "c74bfee8-4dda-43db-bf14-db78b6c63749" 00:16:06.824 } 00:16:06.824 ] 00:16:06.824 } 00:16:06.824 ] 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.824 17:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:16:06.824 17:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:06.824 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:06.824 rmmod nvme_tcp 00:16:06.824 rmmod nvme_fabrics 00:16:06.824 rmmod nvme_keyring 00:16:06.825 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:06.825 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:16:06.825 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:16:06.825 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 263850 ']' 00:16:06.825 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 263850 00:16:06.825 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 263850 ']' 00:16:06.825 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 263850 00:16:06.825 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:16:06.825 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:06.825 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 263850 00:16:07.087 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:07.087 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:07.087 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 263850' 00:16:07.087 killing process with pid 263850 00:16:07.087 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 263850 00:16:07.087 17:31:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 263850 00:16:07.087 17:31:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:07.087 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:07.087 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:07.087 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:16:07.088 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:16:07.088 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:07.088 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:16:07.088 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:07.088 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:07.088 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.088 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:07.088 17:31:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:09.636 00:16:09.636 real 0m11.977s 00:16:09.636 user 0m9.054s 00:16:09.636 sys 0m6.284s 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.636 ************************************ 00:16:09.636 END TEST nvmf_target_discovery 00:16:09.636 ************************************ 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:09.636 ************************************ 00:16:09.636 START TEST nvmf_referrals 00:16:09.636 ************************************ 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:09.636 * Looking for test storage... 
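The nvmf_target_discovery run that just ended reduces to a small RPC loop: for each of four null bdevs the test creates a subsystem, attaches the bdev as a namespace, and exposes it on a TCP listener, then cross-checks nvme discover output against nvmf_get_subsystems. A condensed sketch of that loop, reconstructed from the rpc_cmd calls captured in the xtrace above (rpc.py is the stock SPDK helper standing in for the test's rpc_cmd wrapper; names, sizes, and flags are copied from the log):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in 1 2 3 4; do
  $rpc bdev_null_create Null$i 102400 512                      # null bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done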
00:16:09.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:09.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.636 --rc genhtml_branch_coverage=1 00:16:09.636 --rc genhtml_function_coverage=1 00:16:09.636 --rc genhtml_legend=1 00:16:09.636 --rc geninfo_all_blocks=1 00:16:09.636 --rc geninfo_unexecuted_blocks=1 00:16:09.636 00:16:09.636 ' 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:09.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.636 --rc genhtml_branch_coverage=1 00:16:09.636 --rc genhtml_function_coverage=1 00:16:09.636 --rc genhtml_legend=1 00:16:09.636 --rc geninfo_all_blocks=1 00:16:09.636 --rc geninfo_unexecuted_blocks=1 00:16:09.636 00:16:09.636 ' 00:16:09.636 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:09.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.636 --rc genhtml_branch_coverage=1 00:16:09.636 --rc genhtml_function_coverage=1 00:16:09.636 --rc genhtml_legend=1 00:16:09.636 --rc geninfo_all_blocks=1 00:16:09.637 --rc geninfo_unexecuted_blocks=1 00:16:09.637 00:16:09.637 ' 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:09.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.637 --rc genhtml_branch_coverage=1 00:16:09.637 --rc genhtml_function_coverage=1 00:16:09.637 --rc genhtml_legend=1 00:16:09.637 --rc geninfo_all_blocks=1 00:16:09.637 --rc geninfo_unexecuted_blocks=1 00:16:09.637 00:16:09.637 ' 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:09.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
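The constants being set here (three referral addresses, 127.0.0.2 through 127.0.0.4, on port 4430) parameterize the referrals test; nvmftestinit then builds the physical-NIC topology used by every command that follows. One E810 port is moved into a private network namespace to act as the target side (10.0.0.2) while the other stays in the root namespace as the initiator (10.0.0.1). A sketch of that plumbing, reconstructed from the ip/iptables/ping commands captured further down (cvl_0_0 and cvl_0_1 are the two ports this host enumerates):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # target reachable from initiator
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # and the reverse path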
00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:16:09.637 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:17.786 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:17.786 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:16:17.786 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:17.786 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:17.786 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:17.786 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:17.786 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:17.786 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:16:17.786 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:17.786 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:16:17.786 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:16:17.786 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:16:17.786 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:16:17.786 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:16:17.786 17:32:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:16:17.786 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:17.786 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:17.786 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:17.786 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:17.786 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:17.786 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:17.786 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:17.786 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:17.787 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:17.787 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:17.787 
17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:17.787 Found net devices under 0000:31:00.0: cvl_0_0 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:17.787 Found net devices under 0000:31:00.1: cvl_0_1 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:17.787 17:32:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:17.787 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:17.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:17.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:16:17.787 00:16:17.787 --- 10.0.0.2 ping statistics --- 00:16:17.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.787 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:17.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:17.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:16:17.787 00:16:17.787 --- 10.0.0.1 ping statistics --- 00:16:17.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.787 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=268477 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 268477 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 268477 ']' 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
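With the namespace in place, nvmfappstart launches the target inside it; the invocation is the one recorded alongside waitforlisten above. A sketch (backgrounding and PID capture are implied by nvmfpid=268477 in the log):

ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!    # 268477 in this run
# waitforlisten polls /var/tmp/spdk.sock until the app answers RPCs,
# which is what the "Waiting for process..." message above is reporting.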
00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:17.787 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:17.787 [2024-10-08 17:32:09.188487] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:16:17.787 [2024-10-08 17:32:09.188552] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.787 [2024-10-08 17:32:09.278145] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:17.787 [2024-10-08 17:32:09.374855] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.788 [2024-10-08 17:32:09.374916] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.788 [2024-10-08 17:32:09.374924] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.788 [2024-10-08 17:32:09.374932] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.788 [2024-10-08 17:32:09.374938] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.788 [2024-10-08 17:32:09.376939] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.788 [2024-10-08 17:32:09.377103] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.788 [2024-10-08 17:32:09.377151] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.788 [2024-10-08 17:32:09.377151] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:16:18.050 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:18.050 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:16:18.050 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:18.050 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:18.050 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:18.312 [2024-10-08 17:32:10.071015] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:16:18.312 [2024-10-08 17:32:10.087272] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:18.312 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:16:18.574 17:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:18.574 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:18.835 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:18.835 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:16:18.835 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:16:18.835 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.835 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:18.835 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.835 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:18.835 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.835 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:18.835 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.835 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:16:18.835 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:18.835 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:18.835 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:18.835 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.835 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:18.835 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:18.836 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.836 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:16:18.836 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:18.836 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:16:18.836 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:16:18.836 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:18.836 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:18.836 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:18.836 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:19.097 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:16:19.097 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:19.097 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:16:19.097 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:16:19.097 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:19.097 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:19.097 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:19.358 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:16:19.358 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:16:19.358 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:16:19.358 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:16:19.358 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:19.359 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:19.620 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:19.620 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:19.620 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.620 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:19.620 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.620 17:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:16:19.620 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:19.620 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:19.620 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:19.620 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.620 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:19.620 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:19.620 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.620 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:16:19.620 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:19.620 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:16:19.620 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:19.620 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:19.620 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:19.620 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:19.620 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:19.881 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:16:19.881 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:19.881 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:16:19.881 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:16:19.881 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:19.881 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:19.881 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:19.881 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:16:19.881 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:16:19.881 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:16:19.881 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:16:19.881 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:19.881 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:20.143 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:20.143 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:16:20.143 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.143 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:20.143 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.143 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:20.143 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:16:20.143 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.143 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:20.143 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.143 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:16:20.143 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:16:20.143 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:20.143 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:20.143 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:20.143 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:20.143 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:20.405 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:20.405 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:16:20.405 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:16:20.405 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:16:20.405 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:20.405 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:16:20.405 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
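For reference, the referral checks exercised above reduce to the following sequence. This is a minimal sketch, not the test script itself: it assumes a running SPDK target with scripts/rpc.py on PATH and nvme-cli installed, and reuses the log's listener address (10.0.0.2:8009) and referral address (127.0.0.2:4430).

# register a referral on the discovery subsystem, then verify it two ways
rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
rpc.py nvmf_discovery_get_referrals | jq length          # -> 1

# the referral also appears in the discovery log page seen by hosts
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

# removal mirrors addition
rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
rpc.py nvmf_discovery_get_referrals | jq length          # -> 0

# -n pins a referral to a specific subsystem NQN instead of the default
# discovery NQN (the run above exercises both "discovery" and cnode1):
rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1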
00:16:20.405 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:16:20.405 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:20.405 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:20.405 rmmod nvme_tcp 00:16:20.405 rmmod nvme_fabrics 00:16:20.405 rmmod nvme_keyring 00:16:20.405 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:20.405 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:16:20.405 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:16:20.405 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 268477 ']' 00:16:20.405 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 268477 00:16:20.405 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 268477 ']' 00:16:20.405 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 268477 00:16:20.405 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:16:20.405 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:20.405 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 268477 00:16:20.666 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:20.666 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:20.666 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 268477' 00:16:20.666 killing process with pid 268477 00:16:20.666 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 268477 00:16:20.666 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 268477 00:16:20.666 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:20.666 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:20.666 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:20.666 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:16:20.666 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:16:20.666 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:20.666 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:16:20.666 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:20.666 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:20.666 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.666 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:20.666 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:23.213 00:16:23.213 real 0m13.450s 00:16:23.213 user 0m15.889s 00:16:23.213 sys 0m6.652s 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:23.213 ************************************ 00:16:23.213 END TEST nvmf_referrals 00:16:23.213 ************************************ 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:23.213 ************************************ 00:16:23.213 START TEST nvmf_connect_disconnect 00:16:23.213 ************************************ 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:16:23.213 * Looking for test storage... 00:16:23.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:23.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.213 --rc genhtml_branch_coverage=1 00:16:23.213 --rc genhtml_function_coverage=1 00:16:23.213 --rc genhtml_legend=1 00:16:23.213 --rc geninfo_all_blocks=1 00:16:23.213 --rc geninfo_unexecuted_blocks=1 00:16:23.213 00:16:23.213 ' 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:23.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.213 --rc genhtml_branch_coverage=1 00:16:23.213 --rc genhtml_function_coverage=1 00:16:23.213 --rc genhtml_legend=1 00:16:23.213 --rc geninfo_all_blocks=1 00:16:23.213 --rc geninfo_unexecuted_blocks=1 00:16:23.213 00:16:23.213 ' 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:23.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.213 --rc genhtml_branch_coverage=1 00:16:23.213 --rc genhtml_function_coverage=1 00:16:23.213 --rc genhtml_legend=1 00:16:23.213 --rc geninfo_all_blocks=1 00:16:23.213 --rc geninfo_unexecuted_blocks=1 00:16:23.213 00:16:23.213 ' 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:23.213 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.213 --rc genhtml_branch_coverage=1 00:16:23.213 --rc genhtml_function_coverage=1 00:16:23.213 --rc genhtml_legend=1 00:16:23.213 --rc geninfo_all_blocks=1 00:16:23.213 --rc geninfo_unexecuted_blocks=1 00:16:23.213 00:16:23.213 ' 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.213 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.214 17:32:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:23.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:16:23.214 17:32:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:31.359 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:31.359 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:16:31.359 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:31.359 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:31.359 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:31.359 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:31.359 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:31.359 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:16:31.359 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:31.359 
17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:16:31.359 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:16:31.359 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:16:31.359 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:31.360 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:31.360 
17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:31.360 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:31.360 Found net devices under 0000:31:00.0: cvl_0_0 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:31.360 Found net devices under 0000:31:00.1: cvl_0_1 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:31.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:31.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:16:31.360 00:16:31.360 --- 10.0.0.2 ping statistics --- 00:16:31.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.360 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:31.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:31.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:16:31.360 00:16:31.360 --- 10.0.0.1 ping statistics --- 00:16:31.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.360 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:31.360 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=273465 00:16:31.361 17:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 273465 00:16:31.361 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:31.361 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 273465 ']' 00:16:31.361 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.361 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:31.361 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.361 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:31.361 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:31.361 [2024-10-08 17:32:22.717075] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:16:31.361 [2024-10-08 17:32:22.717138] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.361 [2024-10-08 17:32:22.807367] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:31.361 [2024-10-08 17:32:22.903661] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.361 [2024-10-08 17:32:22.903717] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.361 [2024-10-08 17:32:22.903726] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.361 [2024-10-08 17:32:22.903738] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.361 [2024-10-08 17:32:22.903744] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:31.361 [2024-10-08 17:32:22.905890] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:16:31.361 [2024-10-08 17:32:22.906053] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:16:31.361 [2024-10-08 17:32:22.906108] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:16:31.361 [2024-10-08 17:32:22.906108] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:16:31.622 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:31.622 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0
00:16:31.622 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:16:31.622 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable
00:16:31.622 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:16:31.622 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:31.622 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:16:31.622 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:31.622 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:16:31.622 [2024-10-08 17:32:23.599225] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:31.622 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:31.622 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512
00:16:31.622 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:31.622 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:16:31.884 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:31.885 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0
00:16:31.885 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:16:31.885 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:31.885 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:16:31.885 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:31.885 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:16:31.885 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:31.885 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:16:31.885 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:31.885 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:31.885 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:31.885 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:16:31.885 [2024-10-08 17:32:23.668860] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:31.885 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:31.885 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']'
00:16:31.885 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5
00:16:31.885 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x
00:16:36.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:39.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:42.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:46.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:50.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
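The trace above provisions the target over RPC (TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem cnode1, its namespace, a listener on 10.0.0.2:4420) and then runs five connect/disconnect iterations; each "disconnected 1 controller(s)" line is nvme-cli output from the initiator side. A condensed sketch of the same sequence, assuming rpc.py stands in for the test's rpc_cmd wrapper and that the initiator uses stock nvme-cli (the connect flags are inferred from the listener parameters, they do not appear verbatim in the trace):

    # Provisioning, as exercised by connect_disconnect.sh
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512                      # returns "Malloc0"
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Five connect/disconnect iterations (num_iterations=5 in the trace)
    for i in $(seq 1 5); do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "... disconnected 1 controller(s)"
    done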
00:16:50.205 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
00:16:50.205 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini
00:16:50.205 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup
00:16:50.205 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync
00:16:50.205 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:50.205 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e
00:16:50.205 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:50.205 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:16:50.205 rmmod nvme_tcp
00:16:50.205 rmmod nvme_fabrics
00:16:50.205 rmmod nvme_keyring
00:16:50.205 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:50.205 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e
00:16:50.205 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0
00:16:50.205 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 273465 ']'
00:16:50.205 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 273465
00:16:50.205 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 273465 ']'
00:16:50.205 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 273465
00:16:50.205 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname
00:16:50.205 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:50.205 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 273465
00:16:50.205 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:16:50.205 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:16:50.205 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 273465'
00:16:50.205 killing process with pid 273465
00:16:50.205 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 273465
00:16:50.205 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 273465
00:16:50.466 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:16:50.466 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:16:50.466 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:16:50.466 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr
00:16:50.466 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save
00:16:50.466 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:16:50.466 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore
00:16:50.466 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:16:50.466 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:16:50.466 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:50.466 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:50.466 17:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:52.380 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:16:52.380
00:16:52.380 real 0m29.540s
00:16:52.380 user 1m19.121s
00:16:52.380 sys 0m7.188s
00:16:52.380 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:52.380 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:16:52.380 ************************************
00:16:52.380 END TEST nvmf_connect_disconnect
00:16:52.380 ************************************
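nvmftestfini above tears the fixture down in a fixed order: unload the NVMe initiator modules, kill the target by pid, restore any iptables rules tagged SPDK_NVMF, drop the test namespace, and flush the leftover interface address. A rough outline, assuming the helper names seen in the trace map onto these plain commands (the netns deletion is an inference about what _remove_spdk_ns does):

    # Teardown outline, mirroring nvmftestfini in nvmf/common.sh
    modprobe -v -r nvme-tcp                                # rmmod lines in the log
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess by pid
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # roughly what _remove_spdk_ns does
    ip -4 addr flush cvl_0_1                               # clear the initiator-side address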
00:16:52.381 17:32:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:16:52.381 17:32:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:16:52.381 17:32:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:52.381 17:32:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:52.381 ************************************
00:16:52.381 START TEST nvmf_multitarget
00:16:52.381 ************************************
00:16:52.381 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:16:52.642 * Looking for test storage...
00:16:52.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-:
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-:
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<'
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:52.642 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:16:52.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:52.643 --rc genhtml_branch_coverage=1
00:16:52.643 --rc genhtml_function_coverage=1
00:16:52.643 --rc genhtml_legend=1
00:16:52.643 --rc geninfo_all_blocks=1
00:16:52.643 --rc geninfo_unexecuted_blocks=1
00:16:52.643
00:16:52.643 '
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:16:52.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:52.643 --rc genhtml_branch_coverage=1
00:16:52.643 --rc genhtml_function_coverage=1
00:16:52.643 --rc genhtml_legend=1
00:16:52.643 --rc geninfo_all_blocks=1
00:16:52.643 --rc geninfo_unexecuted_blocks=1
00:16:52.643
00:16:52.643 '
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:16:52.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:52.643 --rc genhtml_branch_coverage=1
00:16:52.643 --rc genhtml_function_coverage=1
00:16:52.643 --rc genhtml_legend=1
00:16:52.643 --rc geninfo_all_blocks=1
00:16:52.643 --rc geninfo_unexecuted_blocks=1
00:16:52.643
00:16:52.643 '
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:16:52.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:52.643 --rc genhtml_branch_coverage=1
00:16:52.643 --rc genhtml_function_coverage=1
00:16:52.643 --rc genhtml_legend=1
00:16:52.643 --rc geninfo_all_blocks=1
00:16:52.643 --rc geninfo_unexecuted_blocks=1
00:16:52.643
00:16:52.643 '
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
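The lt 1.15 2 / cmp_versions trace above is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x before choosing the coverage flags exported just afterwards: each version string is split on dot/dash/colon and compared component-wise. A compact sketch of the same idea (the helper name is kept, but the implementation below is paraphrased from the trace, not copied from the script):

    # Sketch of the version test driving the LCOV_OPTS selection above
    lt() {  # returns 0 when $1 < $2, comparing separated components numerically
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x: use legacy coverage flags"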
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:16:52.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
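The common.sh sourcing above also builds the initiator identity that later nvme connect calls use: nvme gen-hostnqn emits a fresh nqn.2014-08.org.nvmexpress:uuid:... string, the UUID tail becomes NVME_HOSTID, and both are packed into the NVME_HOST argument array. A sketch of that wiring (the array packing appears verbatim in the trace; deriving the HOSTID with a parameter expansion is an assumption about how the tail is extracted):

    # Host identity plumbing from nvmf/common.sh, roughly
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumption: UUID is the last ':'-separated field
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # later consumed as: nvme connect "${NVME_HOST[@]}" -t tcp -a "$ip" -s 4420 -n "$subnqn"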
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable
00:16:52.643 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=()
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=()
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=()
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=()
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=()
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=()
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=()
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:00.788 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:17:00.788 Found 0000:31:00.0 (0x8086 - 0x159b)
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:17:00.789 Found 0000:31:00.1 (0x8086 - 0x159b)
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]]
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:17:00.789 Found net devices under 0000:31:00.0: cvl_0_0
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]]
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:17:00.789 Found net devices under 0000:31:00.1: cvl_0_1
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
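The discovery loop above is plain sysfs walking: the script keeps per-family lists of PCI vendor:device IDs (e810, x722, mlx), and for each matching function it globs /sys/bus/pci/devices/$pci/net/ to learn which netdevs are bound, which is how cvl_0_0 and cvl_0_1 are found. A stripped-down sketch of the same walk, with lspci-style enumeration standing in for the script's pci_bus_cache helper (an assumption, as is limiting it to the single 0x159b device ID seen here):

    # Minimal sysfs walk matching the trace: net devices for Intel 8086:159b NICs
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $netdev ]] && echo "Found net devices under $pci: ${netdev##*/}"
        done
    done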
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:17:00.789 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:17:00.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:00.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms
00:17:00.789
00:17:00.789 --- 10.0.0.2 ping statistics ---
00:17:00.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:00.789 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:00.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:00.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms
00:17:00.789
00:17:00.789 --- 10.0.0.1 ping statistics ---
00:17:00.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:00.789 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms
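nvmf_tcp_init gives the target its own network namespace so that initiator (cvl_0_1, 10.0.0.1, root namespace) and target (cvl_0_0 inside cvl_0_0_ns_spdk, 10.0.0.2) talk over a real NIC pair, then opens a tagged iptables hole for port 4420 and verifies reachability with one ping in each direction, which is exactly the output above. The same steps as plain commands, condensed from the trace:

    # netns topology from nvmf_tcp_init
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator

The SPDK_NVMF comment tag is what lets the teardown later strip only these rules via iptables-save | grep -v SPDK_NVMF | iptables-restore.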
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=281644
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 281644
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 281644 ']'
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:00.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable
00:17:00.789 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:17:00.789 [2024-10-08 17:32:52.356996] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
00:17:00.789 [2024-10-08 17:32:52.357064] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:00.789 [2024-10-08 17:32:52.448678] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:17:00.789 [2024-10-08 17:32:52.544642] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:00.789 [2024-10-08 17:32:52.544704] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:00.789 [2024-10-08 17:32:52.544713] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:00.789 [2024-10-08 17:32:52.544720] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:00.789 [2024-10-08 17:32:52.544727] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:00.789 [2024-10-08 17:32:52.547005] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:17:00.789 [2024-10-08 17:32:52.547110] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:17:00.789 [2024-10-08 17:32:52.547384] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:17:00.789 [2024-10-08 17:32:52.547387] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:17:01.363 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:17:01.363 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0
00:17:01.363 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:17:01.363 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable
00:17:01.363 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:17:01.363 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:01.363 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:17:01.363 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:17:01.363 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length
00:17:01.363 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']'
00:17:01.363 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
00:17:01.625 "nvmf_tgt_1"
00:17:01.625 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
00:17:01.625 "nvmf_tgt_2"
00:17:01.625 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:17:01.625 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length
00:17:01.886 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']'
00:17:01.886 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
00:17:01.886 true
00:17:01.886 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
00:17:02.147 true
00:17:02.147 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:17:02.147 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length
00:17:02.147 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']'
00:17:02.147 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:17:02.147 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini
00:17:02.147 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup
00:17:02.147 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync
00:17:02.147 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:02.147 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e
00:17:02.147 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:02.147 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:02.147 rmmod nvme_tcp
00:17:02.147 rmmod nvme_fabrics
00:17:02.147 rmmod nvme_keyring
00:17:02.147 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:02.147 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e
00:17:02.147 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0
00:17:02.147 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 281644 ']'
00:17:02.147 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 281644
00:17:02.147 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 281644 ']'
00:17:02.147 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 281644
00:17:02.147 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname
00:17:02.147 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:02.147 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 281644
00:17:02.409 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:17:02.409 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:17:02.409 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 281644'
00:17:02.409 killing process with pid 281644
00:17:02.409 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 281644
00:17:02.409 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 281644
00:17:02.409 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:17:02.409 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:17:02.409 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:17:02.409 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr
00:17:02.409 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save
00:17:02.409 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:17:02.409 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore
00:17:02.409 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:02.409 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns
00:17:02.409 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:02.409 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:02.409 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:17:04.958
00:17:04.958 real 0m12.082s
00:17:04.958 user 0m10.225s
00:17:04.958 sys 0m6.329s
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:17:04.958 ************************************
00:17:04.958 END TEST nvmf_multitarget
00:17:04.958 ************************************
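The multitarget test that just finished exercises SPDK's ability to host several independent NVMe-oF targets in one process: it asserts only the default target exists, adds nvmf_tgt_1 and nvmf_tgt_2, asserts the count is 3, deletes both, and asserts the count is back to 1. The jq-based assertion pattern from the trace condenses to the following sketch (rpc is the test's multitarget_rpc.py wrapper; exiting on mismatch stands in for the script's failure handling):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" != 1 ] && exit 1   # only the default target
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" != 3 ] && exit 1   # default plus two new ones
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" != 1 ] && exit 1   # back to just the default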
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:04.958 ************************************
00:17:04.958 START TEST nvmf_rpc
00:17:04.958 ************************************
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp
00:17:04.958 * Looking for test storage...
00:17:04.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:17:04.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:04.958 --rc genhtml_branch_coverage=1
00:17:04.958 --rc genhtml_function_coverage=1
00:17:04.958 --rc genhtml_legend=1
00:17:04.958 --rc geninfo_all_blocks=1
00:17:04.958 --rc geninfo_unexecuted_blocks=1
00:17:04.958
00:17:04.958 '
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:17:04.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:04.958 --rc genhtml_branch_coverage=1
00:17:04.958 --rc genhtml_function_coverage=1
00:17:04.958 --rc genhtml_legend=1
00:17:04.958 --rc geninfo_all_blocks=1
00:17:04.958 --rc geninfo_unexecuted_blocks=1
00:17:04.958
00:17:04.958 '
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:17:04.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:04.958 --rc genhtml_branch_coverage=1
00:17:04.958 --rc genhtml_function_coverage=1
00:17:04.958 --rc genhtml_legend=1
00:17:04.958 --rc geninfo_all_blocks=1
00:17:04.958 --rc geninfo_unexecuted_blocks=1
00:17:04.958
00:17:04.958 '
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:17:04.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:04.958 --rc genhtml_branch_coverage=1
00:17:04.958 --rc genhtml_function_coverage=1
00:17:04.958 --rc genhtml_legend=1
00:17:04.958 --rc geninfo_all_blocks=1
00:17:04.958 --rc geninfo_unexecuted_blocks=1
00:17:04.958
00:17:04.958 '
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:04.958 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:17:04.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable
00:17:04.959 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=()
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=()
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=()
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=()
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=()
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=()
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=()
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:13.105 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:17:13.106 Found 0000:31:00.0 (0x8086 - 0x159b)
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:17:13.106 Found 0000:31:00.1 (0x8086 - 0x159b)
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- #
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]]
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:17:13.106 Found net devices under 0000:31:00.0: cvl_0_0
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]]
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:17:13.106 Found net devices under 0000:31:00.1: cvl_0_1
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
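Note: the trace above is the tail of the NIC discovery pass. Both E810 ports (vendor 0x8086, device 0x159b, functions 0000:31:00.0/.1) were matched and the netdevs registered under them (cvl_0_0, cvl_0_1) were recorded. A rough standalone equivalent of that sysfs walk, for reproducing the step outside the harness (a sketch, not the exact gather_supported_nvmf_pci_devs helper):

  #!/usr/bin/env bash
  # Walk PCI functions and report the kernel netdevs under matching NICs.
  # Vendor/device IDs are the ones matched in this run (Intel E810, 0x159b).
  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] || continue          # function with no bound netdev
          echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
  done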
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:17:13.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:13.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms
00:17:13.106
00:17:13.106 --- 10.0.0.2 ping statistics ---
00:17:13.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:13.106 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:13.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:13.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms
00:17:13.106
00:17:13.106 --- 10.0.0.1 ping statistics ---
00:17:13.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:13.106 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=286515
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 286515
00:17:13.106 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:17:13.107 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 286515 ']'
00:17:13.107 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:13.107 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:17:13.107 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:13.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:13.107 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:17:13.107 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.107 [2024-10-08 17:33:04.462894] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
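Note: that block is the entire network fixture for a phy run. One physical port (cvl_0_0) is moved into the private namespace cvl_0_0_ns_spdk and carries the target address 10.0.0.2; its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; an iptables rule admits TCP/4420; a ping in each direction proves reachability; and nvmf_tgt is then launched inside the namespace via NVMF_TARGET_NS_CMD. Condensed to its effective commands (a sketch using this run's names and addresses, not the exact nvmf_tcp_init helper):

  NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                 # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"             # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                # root ns -> target ns must answer
  ip netns exec "$NS" ./build/bin/nvmf_tgt -m 0xF & # target runs inside the namespace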
00:17:13.107 [2024-10-08 17:33:04.462964] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:13.107 [2024-10-08 17:33:04.552477] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:17:13.107 [2024-10-08 17:33:04.651619] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:13.107 [2024-10-08 17:33:04.651683] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:13.107 [2024-10-08 17:33:04.651692] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:13.107 [2024-10-08 17:33:04.651699] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:13.107 [2024-10-08 17:33:04.651705] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:13.107 [2024-10-08 17:33:04.654166] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:17:13.107 [2024-10-08 17:33:04.654328] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:17:13.107 [2024-10-08 17:33:04.654487] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:17:13.107 [2024-10-08 17:33:04.654488] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:17:13.368 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:17:13.368 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0
00:17:13.368 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:17:13.368 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable
00:17:13.368 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.368 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:13.368 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats
00:17:13.368 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.368 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.368 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.368 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{
00:17:13.368 "tick_rate": 2400000000,
00:17:13.368 "poll_groups": [
00:17:13.368 {
00:17:13.368 "name": "nvmf_tgt_poll_group_000",
00:17:13.368 "admin_qpairs": 0,
00:17:13.368 "io_qpairs": 0,
00:17:13.368 "current_admin_qpairs": 0,
00:17:13.368 "current_io_qpairs": 0,
00:17:13.368 "pending_bdev_io": 0,
00:17:13.368 "completed_nvme_io": 0,
00:17:13.368 "transports": []
00:17:13.368 },
00:17:13.368 {
00:17:13.368 "name": "nvmf_tgt_poll_group_001",
00:17:13.368 "admin_qpairs": 0,
00:17:13.368 "io_qpairs": 0,
00:17:13.368 "current_admin_qpairs": 0,
00:17:13.368 "current_io_qpairs": 0,
00:17:13.368 "pending_bdev_io": 0,
00:17:13.368 "completed_nvme_io": 0,
00:17:13.368 "transports": []
00:17:13.368 },
00:17:13.368 {
00:17:13.368 "name": "nvmf_tgt_poll_group_002",
00:17:13.368 "admin_qpairs": 0,
00:17:13.368 "io_qpairs": 0,
00:17:13.368 "current_admin_qpairs": 0,
00:17:13.368 "current_io_qpairs": 0,
00:17:13.368 "pending_bdev_io": 0,
00:17:13.368 "completed_nvme_io": 0,
00:17:13.368 "transports": []
00:17:13.368 },
00:17:13.368 {
00:17:13.368 "name": "nvmf_tgt_poll_group_003",
00:17:13.368 "admin_qpairs": 0,
00:17:13.368 "io_qpairs": 0,
00:17:13.368 "current_admin_qpairs": 0,
00:17:13.368 "current_io_qpairs": 0,
00:17:13.368 "pending_bdev_io": 0,
00:17:13.368 "completed_nvme_io": 0,
00:17:13.368 "transports": []
00:17:13.368 }
00:17:13.368 ]
00:17:13.368 }'
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name'
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name'
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name'
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 ))
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]'
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]]
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.630 [2024-10-08 17:33:05.466137] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{
00:17:13.630 "tick_rate": 2400000000,
00:17:13.630 "poll_groups": [
00:17:13.630 {
00:17:13.630 "name": "nvmf_tgt_poll_group_000",
00:17:13.630 "admin_qpairs": 0,
00:17:13.630 "io_qpairs": 0,
00:17:13.630 "current_admin_qpairs": 0,
00:17:13.630 "current_io_qpairs": 0,
00:17:13.630 "pending_bdev_io": 0,
00:17:13.630 "completed_nvme_io": 0,
00:17:13.630 "transports": [
00:17:13.630 {
00:17:13.630 "trtype": "TCP"
00:17:13.630 }
00:17:13.630 ]
00:17:13.630 },
00:17:13.630 {
00:17:13.630 "name": "nvmf_tgt_poll_group_001",
00:17:13.630 "admin_qpairs": 0,
00:17:13.630 "io_qpairs": 0,
00:17:13.630 "current_admin_qpairs": 0,
00:17:13.630 "current_io_qpairs": 0,
00:17:13.630 "pending_bdev_io": 0,
00:17:13.630 "completed_nvme_io": 0,
00:17:13.630 "transports": [
00:17:13.630 {
00:17:13.630 "trtype": "TCP"
00:17:13.630 }
00:17:13.630 ]
00:17:13.630 },
00:17:13.630 {
00:17:13.630 "name": "nvmf_tgt_poll_group_002",
00:17:13.630 "admin_qpairs": 0,
00:17:13.630 "io_qpairs": 0,
00:17:13.630 "current_admin_qpairs": 0,
00:17:13.630 "current_io_qpairs": 0,
00:17:13.630 "pending_bdev_io": 0,
00:17:13.630 "completed_nvme_io": 0,
00:17:13.630 "transports": [
00:17:13.630 {
00:17:13.630 "trtype": "TCP"
00:17:13.630 }
00:17:13.630 ]
00:17:13.630 },
00:17:13.630 {
00:17:13.630 "name": "nvmf_tgt_poll_group_003",
00:17:13.630 "admin_qpairs": 0,
00:17:13.630 "io_qpairs": 0,
00:17:13.630 "current_admin_qpairs": 0,
00:17:13.630 "current_io_qpairs": 0,
00:17:13.630 "pending_bdev_io": 0,
00:17:13.630 "completed_nvme_io": 0,
00:17:13.630 "transports": [
00:17:13.630 {
00:17:13.630 "trtype": "TCP"
00:17:13.630 }
00:17:13.630 ]
00:17:13.630 }
00:17:13.630 ]
00:17:13.630 }'
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 ))
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']'
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:17:13.630 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:17:13.891 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.891 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.891 Malloc1
00:17:13.891 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.891 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:17:13.891 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.891 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.891 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.891 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:17:13.891 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.891 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.891 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.891 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
00:17:13.891 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.891 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.892 [2024-10-08 17:33:05.644123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]]
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420
00:17:13.892 [2024-10-08 17:33:05.681388] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396'
00:17:13.892 Failed to write to /dev/nvme-fabrics: Input/output error
00:17:13.892 could not add new controller: failed to write to nvme-fabrics device
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1
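Note: that failure is the point of the assertion. The subsystem was created with host enforcement on (nvmf_subsystem_allow_any_host -d), no host NQN has been added yet, so the fabrics connect is refused in nvmf_qpair_access_allowed and the test proceeds only because the error was expected. The host whitelist is driven by three RPCs, sketched here with this run's NQNs (rpc.py standing in for the rpc_cmd wrapper):

  SUBNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  scripts/rpc.py nvmf_subsystem_allow_any_host -d "$SUBNQN"    # enforce the host list
  scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"  # whitelist this initiator
  scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
  scripts/rpc.py nvmf_subsystem_allow_any_host -e "$SUBNQN"    # or open to any host

The trace below exercises exactly that sequence: add_host, a successful connect, remove_host, another expected rejection, then allow_any_host -e.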
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.892 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:15.278 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:17:15.278 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:17:15.278 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:17:15.278 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:17:15.278 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:17.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
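Note: with the host entry removed, the next connect is again expected to fail, so it is wrapped in NOT, the autotest helper that inverts an exit status so an anticipated failure does not trip errexit. The idea, roughly (a sketch of the pattern, not the exact helper in autotest_common.sh):

  NOT() {
      # Succeed only if the wrapped command fails.
      if "$@"; then
          return 1        # unexpected success
      fi
      return 0            # expected failure, e.g. the rejected connect below
  }
  NOT nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420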
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]]
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:17.823 [2024-10-08 17:33:09.425540] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396'
00:17:17.823 Failed to write to /dev/nvme-fabrics: Input/output error
00:17:17.823 could not add new controller: failed to write to nvme-fabrics device
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:17.823
17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.823 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:19.206 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:19.207 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:19.207 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:19.207 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:19.207 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:21.119 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:21.119 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:21.119 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:21.119 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:21.119 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:21.119 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:21.119 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:21.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:21.380 
17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.380 [2024-10-08 17:33:13.189861] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.380 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:21.381 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.381 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.381 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.381 17:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:23.293 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:23.293 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:23.293 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:23.293 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:23.293 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:25.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.207 [2024-10-08 17:33:16.931653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.207 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:26.603 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:26.603 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:26.603 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:26.603 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:26.603 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:28.522 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:28.522 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:28.522 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:28.522 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:28.522 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:28.522 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:28.522 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:28.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.783 [2024-10-08 17:33:20.645993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.783 17:33:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:30.700 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:30.700 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:30.700 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:30.700 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:30.700 17:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:32.617 
17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:32.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
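Note: from here on the trace is the test's main loop, five iterations of build-up and tear-down against the same subsystem (loops=5 was set when rpc.sh started). Stripped of the xtrace noise, one iteration amounts to the following (a sketch; rpc.py standing in for the rpc_cmd wrapper, arguments exactly as traced):

  for i in $(seq 1 5); do
      scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
      scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      # waitforserial SPDKISFASTANDAWESOME, then detach again:
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
      scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
      scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done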
00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.617 [2024-10-08 17:33:24.463748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.617 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:34.533 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:34.533 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:34.533 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:34.533 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:34.533 17:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:36.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
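The waitforserial xtrace above boils down to polling lsblk for the subsystem's serial number until the expected device count shows up. A hedged reconstruction of that helper (the exact body in common/autotest_common.sh may differ; this keeps only the logic the trace shows):

waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=${2:-1} nvme_devices=0
    while (( i++ <= 15 )); do           # retry budget seen in the trace
        sleep 2
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1
}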
00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.448 [2024-10-08 17:33:28.226403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.448 17:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:37.832 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:37.832 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:37.832 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:37.832 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:37.832 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:40.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:40.380 
17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.380 [2024-10-08 17:33:31.994816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.380 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.380 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.380 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:40.380 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.380 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.380 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.380 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:40.380 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.380 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.380 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.380 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:40.380 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.380 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.380 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.380 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:40.380 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:40.380 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.380 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:40.380 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.380 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 [2024-10-08 17:33:32.062977] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 
17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 [2024-10-08 17:33:32.131155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 [2024-10-08 17:33:32.203399] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 [2024-10-08 17:33:32.271606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.381 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:40.381 "tick_rate": 2400000000, 00:17:40.381 "poll_groups": [ 00:17:40.381 { 00:17:40.381 "name": "nvmf_tgt_poll_group_000", 00:17:40.381 "admin_qpairs": 0, 00:17:40.381 "io_qpairs": 224, 00:17:40.381 "current_admin_qpairs": 0, 00:17:40.381 "current_io_qpairs": 0, 00:17:40.381 "pending_bdev_io": 0, 00:17:40.381 "completed_nvme_io": 224, 00:17:40.381 "transports": [ 00:17:40.381 { 00:17:40.381 "trtype": "TCP" 00:17:40.381 } 00:17:40.381 ] 00:17:40.381 }, 00:17:40.381 { 00:17:40.381 "name": "nvmf_tgt_poll_group_001", 00:17:40.381 "admin_qpairs": 1, 00:17:40.381 "io_qpairs": 223, 00:17:40.381 "current_admin_qpairs": 0, 00:17:40.381 "current_io_qpairs": 0, 00:17:40.381 "pending_bdev_io": 0, 00:17:40.381 "completed_nvme_io": 277, 00:17:40.381 "transports": [ 00:17:40.381 { 00:17:40.381 "trtype": "TCP" 00:17:40.381 } 00:17:40.381 ] 00:17:40.381 }, 00:17:40.381 { 00:17:40.381 "name": "nvmf_tgt_poll_group_002", 00:17:40.381 "admin_qpairs": 6, 00:17:40.381 "io_qpairs": 218, 00:17:40.381 "current_admin_qpairs": 0, 00:17:40.382 "current_io_qpairs": 0, 00:17:40.382 "pending_bdev_io": 0, 00:17:40.382 "completed_nvme_io": 279, 00:17:40.382 "transports": [ 00:17:40.382 { 00:17:40.382 "trtype": "TCP" 00:17:40.382 } 00:17:40.382 ] 00:17:40.382 }, 00:17:40.382 { 00:17:40.382 "name": "nvmf_tgt_poll_group_003", 00:17:40.382 "admin_qpairs": 0, 00:17:40.382 "io_qpairs": 224, 00:17:40.382 "current_admin_qpairs": 0, 00:17:40.382 "current_io_qpairs": 0, 00:17:40.382 "pending_bdev_io": 0, 00:17:40.382 "completed_nvme_io": 459, 00:17:40.382 "transports": [ 00:17:40.382 { 00:17:40.382 "trtype": "TCP" 00:17:40.382 } 00:17:40.382 ] 00:17:40.382 } 00:17:40.382 ] 00:17:40.382 }' 00:17:40.382 17:33:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:40.382 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:40.382 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:40.382 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:40.642 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:40.642 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:40.642 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:40.642 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:40.642 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:40.642 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:17:40.642 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:40.642 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:40.642 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:40.642 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:40.642 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:40.642 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:40.642 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:40.642 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:40.642 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:40.642 rmmod nvme_tcp 00:17:40.642 rmmod nvme_fabrics 00:17:40.642 rmmod nvme_keyring 00:17:40.642 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:40.642 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:40.643 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:40.643 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 286515 ']' 00:17:40.643 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 286515 00:17:40.643 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 286515 ']' 00:17:40.643 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 286515 00:17:40.643 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:17:40.643 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:40.643 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 286515 00:17:40.643 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:40.643 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:40.643 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 286515' 
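The (( 7 > 0 )) and (( 889 > 0 )) checks above sum fields out of the nvmf_get_stats JSON: 0+1+6+0 admin qpairs and 224+223+218+224 io qpairs across the four poll groups. A sketch of the jsum helper doing that summation (feeding it from the $stats variable is an assumption based on the logged filter):

jsum() {
    local filter=$1
    # sum every value the jq filter yields, e.g. '.poll_groups[].admin_qpairs'
    jq "$filter" <<< "$stats" | awk '{s+=$1}END{print s}'
}
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))   # 889 in this run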
00:17:40.643 killing process with pid 286515 00:17:40.643 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 286515 00:17:40.643 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 286515 00:17:40.904 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:40.904 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:40.904 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:40.904 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:40.904 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:40.904 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:17:40.904 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:17:40.904 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:40.904 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:40.904 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.904 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:40.904 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.819 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:42.819 00:17:42.819 real 0m38.247s 00:17:42.819 user 1m54.095s 00:17:42.819 sys 0m8.028s 00:17:42.819 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:42.819 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.819 ************************************ 00:17:42.819 END TEST nvmf_rpc 00:17:42.819 ************************************ 00:17:43.081 17:33:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:43.081 17:33:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:43.081 17:33:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:43.081 17:33:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:43.081 ************************************ 00:17:43.081 START TEST nvmf_invalid 00:17:43.081 ************************************ 00:17:43.081 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:43.081 * Looking for test storage... 
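For reference, the nvmf_rpc teardown logged just above (the rmmod lines through the final addr flush) reduces to roughly this sequence. A condensed, hedged sketch: the real nvmftestfini/iptr helpers in nvmf/common.sh add retries and per-transport branches, and the namespace-removal form here is an assumption:

sync
modprobe -v -r nvme-tcp                        # drops nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"             # killprocess: stop the nvmf_tgt reactor
iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: strip the test's ACCEPT rules
ip netns del cvl_0_0_ns_spdk 2>/dev/null       # _remove_spdk_ns (assumed form)
ip -4 addr flush cvl_0_1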
00:17:43.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:43.081 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:43.081 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:17:43.081 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:43.081 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:43.081 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:43.081 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:43.081 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:43.081 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:43.081 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:43.081 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:43.081 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:43.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.082 --rc genhtml_branch_coverage=1 00:17:43.082 --rc genhtml_function_coverage=1 00:17:43.082 --rc genhtml_legend=1 00:17:43.082 --rc geninfo_all_blocks=1 00:17:43.082 --rc geninfo_unexecuted_blocks=1 00:17:43.082 00:17:43.082 ' 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:43.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.082 --rc genhtml_branch_coverage=1 00:17:43.082 --rc genhtml_function_coverage=1 00:17:43.082 --rc genhtml_legend=1 00:17:43.082 --rc geninfo_all_blocks=1 00:17:43.082 --rc geninfo_unexecuted_blocks=1 00:17:43.082 00:17:43.082 ' 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:43.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.082 --rc genhtml_branch_coverage=1 00:17:43.082 --rc genhtml_function_coverage=1 00:17:43.082 --rc genhtml_legend=1 00:17:43.082 --rc geninfo_all_blocks=1 00:17:43.082 --rc geninfo_unexecuted_blocks=1 00:17:43.082 00:17:43.082 ' 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:43.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.082 --rc genhtml_branch_coverage=1 00:17:43.082 --rc genhtml_function_coverage=1 00:17:43.082 --rc genhtml_legend=1 00:17:43.082 --rc geninfo_all_blocks=1 00:17:43.082 --rc geninfo_unexecuted_blocks=1 00:17:43.082 00:17:43.082 ' 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:43.082 17:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.082 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.344 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:43.344 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:43.344 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.344 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.344 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.344 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.344 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:43.344 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:43.344 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.344 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.344 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:43.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:43.345 17:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:51.493 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:51.493 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:51.493 Found net devices under 0000:31:00.0: cvl_0_0 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:51.493 Found net devices under 0000:31:00.1: cvl_0_1 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:17:51.493 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:51.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:17:51.494 00:17:51.494 --- 10.0.0.2 ping statistics --- 00:17:51.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.494 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:51.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:51.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:17:51.494 00:17:51.494 --- 10.0.0.1 ping statistics --- 00:17:51.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.494 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=296884 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 296884 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 296884 ']' 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:51.494 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:51.494 [2024-10-08 17:33:42.833635] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
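Before the invalid-parameter cases run, nvmf_tcp_init splits the two E810 ports into a target/initiator pair: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables ACCEPT rule opens TCP/4420 on the initiator-facing interface, and a ping in each direction proves the path before nvmf_tgt is launched inside the namespace. A condensed sketch of that setup, with interface names and addresses taken from the trace (run as root; error handling omitted):

# Sketch of the namespace split performed above.
ip netns add cvl_0_0_ns_spdk                 # namespace that will host nvmf_tgt
ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP (default service id 4420) in from the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns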
00:17:51.494 [2024-10-08 17:33:42.833701] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.494 [2024-10-08 17:33:42.920768] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:51.494 [2024-10-08 17:33:43.017254] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.494 [2024-10-08 17:33:43.017309] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.494 [2024-10-08 17:33:43.017318] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.494 [2024-10-08 17:33:43.017326] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.494 [2024-10-08 17:33:43.017332] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.494 [2024-10-08 17:33:43.019351] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.494 [2024-10-08 17:33:43.019518] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.494 [2024-10-08 17:33:43.019722] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.494 [2024-10-08 17:33:43.019723] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:17:51.756 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:51.756 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:17:51.756 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:51.756 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:51.756 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:51.756 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.756 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:51.756 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27969 00:17:52.018 [2024-10-08 17:33:43.871924] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:52.018 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:52.018 { 00:17:52.018 "nqn": "nqn.2016-06.io.spdk:cnode27969", 00:17:52.018 "tgt_name": "foobar", 00:17:52.018 "method": "nvmf_create_subsystem", 00:17:52.018 "req_id": 1 00:17:52.018 } 00:17:52.018 Got JSON-RPC error response 00:17:52.018 response: 00:17:52.018 { 00:17:52.018 "code": -32603, 00:17:52.018 "message": "Unable to find target foobar" 00:17:52.018 }' 00:17:52.018 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:52.018 { 00:17:52.018 "nqn": "nqn.2016-06.io.spdk:cnode27969", 00:17:52.018 "tgt_name": "foobar", 00:17:52.018 "method": "nvmf_create_subsystem", 00:17:52.018 "req_id": 1 00:17:52.018 } 00:17:52.018 Got JSON-RPC error response 00:17:52.018 
response: 00:17:52.018 { 00:17:52.018 "code": -32603, 00:17:52.018 "message": "Unable to find target foobar" 00:17:52.018 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:52.018 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:52.018 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode7884 00:17:52.287 [2024-10-08 17:33:44.080782] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7884: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:52.287 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:52.287 { 00:17:52.287 "nqn": "nqn.2016-06.io.spdk:cnode7884", 00:17:52.287 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:52.287 "method": "nvmf_create_subsystem", 00:17:52.287 "req_id": 1 00:17:52.287 } 00:17:52.287 Got JSON-RPC error response 00:17:52.287 response: 00:17:52.287 { 00:17:52.287 "code": -32602, 00:17:52.287 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:52.287 }' 00:17:52.287 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:52.287 { 00:17:52.287 "nqn": "nqn.2016-06.io.spdk:cnode7884", 00:17:52.287 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:52.287 "method": "nvmf_create_subsystem", 00:17:52.287 "req_id": 1 00:17:52.287 } 00:17:52.287 Got JSON-RPC error response 00:17:52.287 response: 00:17:52.287 { 00:17:52.287 "code": -32602, 00:17:52.287 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:52.287 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:52.287 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:52.287 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14172 00:17:52.555 [2024-10-08 17:33:44.285520] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14172: invalid model number 'SPDK_Controller' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:52.555 { 00:17:52.555 "nqn": "nqn.2016-06.io.spdk:cnode14172", 00:17:52.555 "model_number": "SPDK_Controller\u001f", 00:17:52.555 "method": "nvmf_create_subsystem", 00:17:52.555 "req_id": 1 00:17:52.555 } 00:17:52.555 Got JSON-RPC error response 00:17:52.555 response: 00:17:52.555 { 00:17:52.555 "code": -32602, 00:17:52.555 "message": "Invalid MN SPDK_Controller\u001f" 00:17:52.555 }' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:52.555 { 00:17:52.555 "nqn": "nqn.2016-06.io.spdk:cnode14172", 00:17:52.555 "model_number": "SPDK_Controller\u001f", 00:17:52.555 "method": "nvmf_create_subsystem", 00:17:52.555 "req_id": 1 00:17:52.555 } 00:17:52.555 Got JSON-RPC error response 00:17:52.555 response: 00:17:52.555 { 00:17:52.555 "code": -32602, 00:17:52.555 "message": "Invalid MN SPDK_Controller\u001f" 00:17:52.555 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:52.555 17:33:44 
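Each case in target/invalid.sh follows the same negative-test shape: issue an rpc.py call with one deliberately bad field, capture the JSON-RPC error payload, and glob-match the human-readable message (the backslash-escaped patterns such as *\I\n\v\a\l\i\d\ \S\N* are just quoted shell globs). A generic sketch of that pattern, where expect_rpc_error is a hypothetical helper name rather than anything in the SPDK tree:

# Sketch of the assertion pattern used by target/invalid.sh.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

expect_rpc_error() {
  local pattern=$1; shift
  local out
  # rpc.py exits non-zero on a JSON-RPC error; '|| true' keeps 'set -e'
  # harnesses alive so the glob match below is what decides pass/fail.
  out=$("$rpc" "$@" 2>&1) || true
  [[ $out == *"$pattern"* ]] || { echo "unexpected: $out" >&2; return 1; }
}

# Mirrors the first case above: an unknown target name must be rejected.
expect_rpc_error 'Unable to find target' \
  nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27969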
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.555 17:33:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:52.555 
17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 
00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.555 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:52.556 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:52.556 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:52.556 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.556 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.556 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:52.556 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:52.556 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:52.556 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.556 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.556 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ~ == \- ]] 00:17:52.556 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '~,0+X<,ZzP#fJnSsvRd)f' 00:17:52.556 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '~,0+X<,ZzP#fJnSsvRd)f' nqn.2016-06.io.spdk:cnode16905 00:17:52.819 [2024-10-08 17:33:44.658941] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16905: invalid serial number '~,0+X<,ZzP#fJnSsvRd)f' 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:52.819 { 00:17:52.819 "nqn": "nqn.2016-06.io.spdk:cnode16905", 00:17:52.819 "serial_number": "~,0+X<,ZzP#fJnSsvRd)f", 00:17:52.819 "method": "nvmf_create_subsystem", 00:17:52.819 "req_id": 1 00:17:52.819 } 00:17:52.819 Got JSON-RPC error response 00:17:52.819 response: 00:17:52.819 { 00:17:52.819 "code": -32602, 00:17:52.819 "message": "Invalid SN ~,0+X<,ZzP#fJnSsvRd)f" 00:17:52.819 }' 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:52.819 { 00:17:52.819 "nqn": "nqn.2016-06.io.spdk:cnode16905", 00:17:52.819 "serial_number": "~,0+X<,ZzP#fJnSsvRd)f", 00:17:52.819 "method": "nvmf_create_subsystem", 00:17:52.819 "req_id": 1 00:17:52.819 } 00:17:52.819 Got JSON-RPC error response 00:17:52.819 response: 00:17:52.819 { 00:17:52.819 "code": -32602, 00:17:52.819 "message": "Invalid SN ~,0+X<,ZzP#fJnSsvRd)f" 00:17:52.819 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' 
'74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x29' 00:17:52.819 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.820 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 45 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.083 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+='<' 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ) == \- ]] 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ')7[~..6r`yW)0KX~Ad-B?*AuU|4gf9&_T:qMkB<2$' 00:17:53.084 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ')7[~..6r`yW)0KX~Ad-B?*AuU|4gf9&_T:qMkB<2$' nqn.2016-06.io.spdk:cnode26784 00:17:53.346 [2024-10-08 17:33:45.140799] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26784: invalid model number ')7[~..6r`yW)0KX~Ad-B?*AuU|4gf9&_T:qMkB<2$' 00:17:53.346 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:53.346 { 00:17:53.346 "nqn": "nqn.2016-06.io.spdk:cnode26784", 00:17:53.346 "model_number": ")7[~..6r`yW)0KX~Ad-B?*AuU|4gf9&_T:qMkB<2$", 00:17:53.346 "method": "nvmf_create_subsystem", 00:17:53.346 "req_id": 1 00:17:53.346 } 00:17:53.346 Got JSON-RPC error response 00:17:53.346 response: 00:17:53.346 { 00:17:53.346 "code": -32602, 00:17:53.346 "message": "Invalid MN )7[~..6r`yW)0KX~Ad-B?*AuU|4gf9&_T:qMkB<2$" 00:17:53.346 }' 00:17:53.346 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:53.346 { 00:17:53.346 "nqn": "nqn.2016-06.io.spdk:cnode26784", 00:17:53.346 "model_number": ")7[~..6r`yW)0KX~Ad-B?*AuU|4gf9&_T:qMkB<2$", 00:17:53.346 "method": "nvmf_create_subsystem", 00:17:53.346 "req_id": 1 00:17:53.346 } 00:17:53.346 Got JSON-RPC error response 00:17:53.346 response: 00:17:53.346 { 00:17:53.346 "code": -32602, 00:17:53.346 "message": "Invalid MN )7[~..6r`yW)0KX~Ad-B?*AuU|4gf9&_T:qMkB<2$" 00:17:53.346 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:53.346 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:53.346 [2024-10-08 17:33:45.337577] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.608 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
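The long character-by-character trace above is gen_random_s at work: it fills a chars array with the printable ASCII range plus DEL (32..127), appends one escaped byte per iteration, and checks the first character against '-' so the result cannot be mistaken for a command-line option before feeding it to nvmf_create_subsystem as a serial or model number. A compact sketch of the same idea; the $RANDOM-based selection and the '-' substitution are assumptions, since the trace only shows the per-character append steps:

# Sketch of a gen_random_s-style generator (not the invalid.sh original).
gen_random_s() {
  local length=$1 ll string=
  local chars=()
  # Printable ASCII 32..126 plus DEL (127), matching the array in the trace.
  for ((ll = 32; ll <= 127; ll++)); do chars+=("$ll"); done
  for ((ll = 0; ll < length; ll++)); do
    string+=$(printf "\\$(printf '%03o' "${chars[RANDOM % ${#chars[@]}]}")")
  done
  [[ ${string:0:1} == - ]] && string="_${string:1}"   # never option-like
  echo "$string"
}
gen_random_s 21   # e.g. '~,0+X<,ZzP#fJnSsvRd)f' in the run above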
nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:53.608 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:53.608 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:53.608 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:53.608 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:53.608 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:53.868 [2024-10-08 17:33:45.718763] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:53.868 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:53.868 { 00:17:53.868 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:53.868 "listen_address": { 00:17:53.869 "trtype": "tcp", 00:17:53.869 "traddr": "", 00:17:53.869 "trsvcid": "4421" 00:17:53.869 }, 00:17:53.869 "method": "nvmf_subsystem_remove_listener", 00:17:53.869 "req_id": 1 00:17:53.869 } 00:17:53.869 Got JSON-RPC error response 00:17:53.869 response: 00:17:53.869 { 00:17:53.869 "code": -32602, 00:17:53.869 "message": "Invalid parameters" 00:17:53.869 }' 00:17:53.869 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:53.869 { 00:17:53.869 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:53.869 "listen_address": { 00:17:53.869 "trtype": "tcp", 00:17:53.869 "traddr": "", 00:17:53.869 "trsvcid": "4421" 00:17:53.869 }, 00:17:53.869 "method": "nvmf_subsystem_remove_listener", 00:17:53.869 "req_id": 1 00:17:53.869 } 00:17:53.869 Got JSON-RPC error response 00:17:53.869 response: 00:17:53.869 { 00:17:53.869 "code": -32602, 00:17:53.869 "message": "Invalid parameters" 00:17:53.869 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:53.869 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9897 -i 0 00:17:54.129 [2024-10-08 17:33:45.907328] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9897: invalid cntlid range [0-65519] 00:17:54.130 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:54.130 { 00:17:54.130 "nqn": "nqn.2016-06.io.spdk:cnode9897", 00:17:54.130 "min_cntlid": 0, 00:17:54.130 "method": "nvmf_create_subsystem", 00:17:54.130 "req_id": 1 00:17:54.130 } 00:17:54.130 Got JSON-RPC error response 00:17:54.130 response: 00:17:54.130 { 00:17:54.130 "code": -32602, 00:17:54.130 "message": "Invalid cntlid range [0-65519]" 00:17:54.130 }' 00:17:54.130 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:54.130 { 00:17:54.130 "nqn": "nqn.2016-06.io.spdk:cnode9897", 00:17:54.130 "min_cntlid": 0, 00:17:54.130 "method": "nvmf_create_subsystem", 00:17:54.130 "req_id": 1 00:17:54.130 } 00:17:54.130 Got JSON-RPC error response 00:17:54.130 response: 00:17:54.130 { 00:17:54.130 "code": -32602, 00:17:54.130 "message": "Invalid cntlid range [0-65519]" 00:17:54.130 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:54.130 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
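The listener case above exercises the same pattern against nvmf_subsystem_remove_listener: a subsystem is created (-s SPDK001 -a), the harness deliberately ends up with an empty target address ($IP comes from 'echo "" | head -n 1'), and removing a listener that was never added must fail with -32602 Invalid parameters. Expressed with the hypothetical expect_rpc_error sketch from earlier:

# Sketch of the remove-listener negative case above.
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
expect_rpc_error 'Invalid parameters' \
  nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode \
  -t tcp -a '' -s 4421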
nqn.2016-06.io.spdk:cnode29616 -i 65520 00:17:54.130 [2024-10-08 17:33:46.091968] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29616: invalid cntlid range [65520-65519] 00:17:54.391 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:54.391 { 00:17:54.391 "nqn": "nqn.2016-06.io.spdk:cnode29616", 00:17:54.391 "min_cntlid": 65520, 00:17:54.391 "method": "nvmf_create_subsystem", 00:17:54.391 "req_id": 1 00:17:54.391 } 00:17:54.391 Got JSON-RPC error response 00:17:54.391 response: 00:17:54.391 { 00:17:54.391 "code": -32602, 00:17:54.391 "message": "Invalid cntlid range [65520-65519]" 00:17:54.391 }' 00:17:54.391 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:54.391 { 00:17:54.391 "nqn": "nqn.2016-06.io.spdk:cnode29616", 00:17:54.391 "min_cntlid": 65520, 00:17:54.391 "method": "nvmf_create_subsystem", 00:17:54.391 "req_id": 1 00:17:54.391 } 00:17:54.391 Got JSON-RPC error response 00:17:54.391 response: 00:17:54.391 { 00:17:54.391 "code": -32602, 00:17:54.391 "message": "Invalid cntlid range [65520-65519]" 00:17:54.391 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:54.391 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11500 -I 0 00:17:54.391 [2024-10-08 17:33:46.276525] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11500: invalid cntlid range [1-0] 00:17:54.391 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:54.391 { 00:17:54.391 "nqn": "nqn.2016-06.io.spdk:cnode11500", 00:17:54.391 "max_cntlid": 0, 00:17:54.391 "method": "nvmf_create_subsystem", 00:17:54.391 "req_id": 1 00:17:54.391 } 00:17:54.391 Got JSON-RPC error response 00:17:54.391 response: 00:17:54.391 { 00:17:54.391 "code": -32602, 00:17:54.391 "message": "Invalid cntlid range [1-0]" 00:17:54.391 }' 00:17:54.391 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:54.391 { 00:17:54.391 "nqn": "nqn.2016-06.io.spdk:cnode11500", 00:17:54.391 "max_cntlid": 0, 00:17:54.391 "method": "nvmf_create_subsystem", 00:17:54.391 "req_id": 1 00:17:54.391 } 00:17:54.391 Got JSON-RPC error response 00:17:54.391 response: 00:17:54.391 { 00:17:54.391 "code": -32602, 00:17:54.391 "message": "Invalid cntlid range [1-0]" 00:17:54.391 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:54.391 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11255 -I 65520 00:17:54.652 [2024-10-08 17:33:46.465144] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11255: invalid cntlid range [1-65520] 00:17:54.652 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:54.652 { 00:17:54.652 "nqn": "nqn.2016-06.io.spdk:cnode11255", 00:17:54.652 "max_cntlid": 65520, 00:17:54.652 "method": "nvmf_create_subsystem", 00:17:54.652 "req_id": 1 00:17:54.652 } 00:17:54.652 Got JSON-RPC error response 00:17:54.652 response: 00:17:54.652 { 00:17:54.652 "code": -32602, 00:17:54.652 "message": "Invalid cntlid range [1-65520]" 00:17:54.652 }' 00:17:54.652 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 
00:17:54.652 { 00:17:54.652 "nqn": "nqn.2016-06.io.spdk:cnode11255", 00:17:54.652 "max_cntlid": 65520, 00:17:54.652 "method": "nvmf_create_subsystem", 00:17:54.652 "req_id": 1 00:17:54.652 } 00:17:54.652 Got JSON-RPC error response 00:17:54.652 response: 00:17:54.652 { 00:17:54.652 "code": -32602, 00:17:54.652 "message": "Invalid cntlid range [1-65520]" 00:17:54.652 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:54.652 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3416 -i 6 -I 5 00:17:54.913 [2024-10-08 17:33:46.649780] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3416: invalid cntlid range [6-5] 00:17:54.913 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:54.913 { 00:17:54.913 "nqn": "nqn.2016-06.io.spdk:cnode3416", 00:17:54.913 "min_cntlid": 6, 00:17:54.913 "max_cntlid": 5, 00:17:54.913 "method": "nvmf_create_subsystem", 00:17:54.913 "req_id": 1 00:17:54.913 } 00:17:54.913 Got JSON-RPC error response 00:17:54.913 response: 00:17:54.913 { 00:17:54.913 "code": -32602, 00:17:54.913 "message": "Invalid cntlid range [6-5]" 00:17:54.913 }' 00:17:54.913 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:54.913 { 00:17:54.913 "nqn": "nqn.2016-06.io.spdk:cnode3416", 00:17:54.913 "min_cntlid": 6, 00:17:54.913 "max_cntlid": 5, 00:17:54.913 "method": "nvmf_create_subsystem", 00:17:54.913 "req_id": 1 00:17:54.913 } 00:17:54.913 Got JSON-RPC error response 00:17:54.913 response: 00:17:54.913 { 00:17:54.913 "code": -32602, 00:17:54.913 "message": "Invalid cntlid range [6-5]" 00:17:54.913 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:54.913 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:54.913 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:54.913 { 00:17:54.913 "name": "foobar", 00:17:54.913 "method": "nvmf_delete_target", 00:17:54.913 "req_id": 1 00:17:54.913 } 00:17:54.913 Got JSON-RPC error response 00:17:54.913 response: 00:17:54.913 { 00:17:54.913 "code": -32602, 00:17:54.913 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:54.913 }' 00:17:54.914 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:54.914 { 00:17:54.914 "name": "foobar", 00:17:54.914 "method": "nvmf_delete_target", 00:17:54.914 "req_id": 1 00:17:54.914 } 00:17:54.914 Got JSON-RPC error response 00:17:54.914 response: 00:17:54.914 { 00:17:54.914 "code": -32602, 00:17:54.914 "message": "The specified target doesn't exist, cannot delete it." 
00:17:54.914 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:54.914 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:54.914 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:54.914 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:54.914 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:54.914 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:54.914 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:54.914 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:54.914 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:54.914 rmmod nvme_tcp 00:17:54.914 rmmod nvme_fabrics 00:17:54.914 rmmod nvme_keyring 00:17:54.914 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:54.914 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:54.914 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:54.914 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 296884 ']' 00:17:54.914 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 296884 00:17:54.914 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 296884 ']' 00:17:54.914 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 296884 00:17:54.914 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:17:54.914 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:54.914 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 296884 00:17:55.176 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:55.176 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:55.176 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 296884' 00:17:55.176 killing process with pid 296884 00:17:55.176 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 296884 00:17:55.176 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 296884 00:17:55.176 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:55.176 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:55.176 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:55.176 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:55.176 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:55.176 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:17:55.176 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # 
iptables-restore 00:17:55.176 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:55.176 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:55.176 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.176 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:55.176 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.724 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:57.724 00:17:57.724 real 0m14.290s 00:17:57.724 user 0m20.829s 00:17:57.724 sys 0m6.723s 00:17:57.724 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:57.724 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:57.724 ************************************ 00:17:57.724 END TEST nvmf_invalid 00:17:57.724 ************************************ 00:17:57.724 17:33:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:57.724 17:33:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:57.724 17:33:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:57.724 17:33:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:57.724 ************************************ 00:17:57.724 START TEST nvmf_connect_stress 00:17:57.724 ************************************ 00:17:57.724 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:57.724 * Looking for test storage... 
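For reference, the nvmf_invalid teardown traced above (nvmftestfini) reduces to three host-side cleanup steps. The sketch below is a minimal reconstruction, not the suite's verbatim code: the interface and namespace names (cvl_0_1, cvl_0_0_ns_spdk) are the ones printed in this log, the real logic lives in test/nvmf/common.sh, and the final `ip netns delete` is a stand-in for the _remove_spdk_ns helper, whose body is not shown in this trace.

#!/usr/bin/env bash
# Sketch of the teardown idiom from the trace above; names mirror this log.

# 1. Drop every firewall rule the tests added. Each rule was inserted with
#    `-m comment --comment 'SPDK_NVMF:...'`, so filtering that tag out of an
#    iptables-save dump and restoring the remainder removes exactly the
#    suite's rules and leaves everything else untouched.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# 2. Flush the IPv4 address assigned to the initiator-side interface.
ip -4 addr flush cvl_0_1

# 3. Delete the target-side network namespace (stand-in for the suite's
#    _remove_spdk_ns helper); its interface returns to the default namespace.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true

The save/grep/restore round trip is why the rules are comment-tagged in the first place: cleanup stays correct no matter how many rules a given test added or in what order.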
00:17:57.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:57.724 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:57.724 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:17:57.724 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:57.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.725 --rc genhtml_branch_coverage=1 00:17:57.725 --rc genhtml_function_coverage=1 00:17:57.725 --rc genhtml_legend=1 00:17:57.725 --rc geninfo_all_blocks=1 00:17:57.725 --rc geninfo_unexecuted_blocks=1 00:17:57.725 00:17:57.725 ' 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:57.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.725 --rc genhtml_branch_coverage=1 00:17:57.725 --rc genhtml_function_coverage=1 00:17:57.725 --rc genhtml_legend=1 00:17:57.725 --rc geninfo_all_blocks=1 00:17:57.725 --rc geninfo_unexecuted_blocks=1 00:17:57.725 00:17:57.725 ' 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:57.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.725 --rc genhtml_branch_coverage=1 00:17:57.725 --rc genhtml_function_coverage=1 00:17:57.725 --rc genhtml_legend=1 00:17:57.725 --rc geninfo_all_blocks=1 00:17:57.725 --rc geninfo_unexecuted_blocks=1 00:17:57.725 00:17:57.725 ' 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:57.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.725 --rc genhtml_branch_coverage=1 00:17:57.725 --rc genhtml_function_coverage=1 00:17:57.725 --rc genhtml_legend=1 00:17:57.725 --rc geninfo_all_blocks=1 00:17:57.725 --rc geninfo_unexecuted_blocks=1 00:17:57.725 00:17:57.725 ' 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:57.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.725 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.726 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:57.726 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:57.726 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:57.726 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:05.882 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:05.882 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:05.882 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:18:05.883 17:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:05.883 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:05.883 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:05.883 Found net devices under 0000:31:00.0: cvl_0_0 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:05.883 Found net devices under 0000:31:00.1: cvl_0_1 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:05.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:05.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:18:05.883 00:18:05.883 --- 10.0.0.2 ping statistics --- 00:18:05.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.883 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:18:05.883 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:05.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:05.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:18:05.883 00:18:05.883 --- 10.0.0.1 ping statistics --- 00:18:05.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.883 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:18:05.883 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.883 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:18:05.883 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:05.883 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.883 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:05.883 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:05.883 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.884 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:05.884 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:05.884 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:05.884 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:05.884 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:05.884 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:05.884 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=302126 00:18:05.884 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 302126 00:18:05.884 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:05.884 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 302126 ']' 00:18:05.884 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.884 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:05.884 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:05.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.884 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:05.884 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:05.884 [2024-10-08 17:33:57.125742] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:18:05.884 [2024-10-08 17:33:57.125806] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.884 [2024-10-08 17:33:57.217775] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:05.884 [2024-10-08 17:33:57.309863] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.884 [2024-10-08 17:33:57.309927] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.884 [2024-10-08 17:33:57.309935] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.884 [2024-10-08 17:33:57.309943] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.884 [2024-10-08 17:33:57.309949] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:05.884 [2024-10-08 17:33:57.311307] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.884 [2024-10-08 17:33:57.311467] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.884 [2024-10-08 17:33:57.311467] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:18:06.147 17:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.147 [2024-10-08 17:33:58.049473] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
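Before the stress loop below starts polling the perf process, the target is configured through four JSON-RPC calls, all visible in the rpc_cmd traces around this point (rpc_cmd is the suite's wrapper around scripts/rpc.py). A condensed sketch under that reading; the rpc.py path is this workspace's, and the option comments are interpretive rather than quoted from the suite:

#!/usr/bin/env bash
# Target bring-up for connect_stress, reconstructed from the traces above
# and below; a sketch, not the suite's verbatim code.
rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }

# TCP transport with the suite's extra option (-o) and a -u 8192 I/O unit size.
rpc nvmf_create_transport -t tcp -o -u 8192

# Subsystem cnode1: allow any host (-a), fixed serial, at most 10 namespaces.
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

# Listen for NVMe/TCP connections on the namespaced target address.
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# A 1000 MiB null bdev with 512-byte blocks to back the subsystem.
rpc bdev_null_create NULL1 1000 512

Note that nvmf_tgt itself runs inside the cvl_0_0_ns_spdk namespace (the `ip netns exec ... nvmf_tgt` invocation traced above), yet rpc.py needs no netns prefix: it reaches the target over the /var/tmp/spdk.sock UNIX socket, which lives in the filesystem and is unaffected by network namespaces. The stress tool is then pointed at that listener, as the PERF_PID trace shows: connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10.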
00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.147 [2024-10-08 17:33:58.091631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.147 NULL1 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=302476 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.147 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.409 17:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.409 17:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.409 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.670 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.670 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:06.670 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:06.670 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.670 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.932 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.932 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:06.932 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:06.932 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.932 17:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:07.504 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.504 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:07.504 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:07.504 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.504 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:07.766 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.766 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:07.766 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:07.766 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.766 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.028 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.028 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:08.028 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:08.029 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.029 17:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.292 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.292 17:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:08.292 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:08.292 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.292 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.552 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.552 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:08.552 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:08.552 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.552 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:09.124 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.124 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:09.124 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:09.124 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.124 17:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:09.384 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.384 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:09.384 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:09.384 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.384 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:09.644 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.644 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:09.644 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:09.644 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.644 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:09.905 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.905 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:09.905 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:09.905 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.905 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:10.166 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.166 17:34:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:10.166 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:10.166 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.166 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:10.738 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.738 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:10.738 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:10.738 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.738 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:10.999 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.999 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:10.999 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:10.999 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.999 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:11.260 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.260 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:11.260 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:11.260 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.260 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:11.521 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.521 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:11.521 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:11.521 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.521 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:11.782 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.782 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:11.782 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:11.782 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.782 17:34:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.354 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.354 17:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:12.354 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:12.354 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.354 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.616 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.616 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:12.616 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:12.616 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.616 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.877 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.877 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:12.877 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:12.877 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.877 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:13.137 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.137 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:13.137 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:13.137 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.137 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:13.398 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.398 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:13.398 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:13.398 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.398 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:13.969 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.969 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:13.969 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:13.970 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.970 17:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:14.233 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.233 17:34:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:14.233 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:14.233 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.233 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:14.493 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.493 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:14.493 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:14.493 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.493 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:14.754 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.754 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:14.754 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:14.754 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.754 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:15.015 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.015 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:15.015 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:15.015 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.015 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:15.586 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.586 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:15.586 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:15.586 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.587 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:15.847 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.847 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:15.847 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:15.847 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.847 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:16.108 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.108 17:34:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:16.108 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:16.108 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.108 17:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:16.369 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:16.369 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.369 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 302476 00:18:16.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (302476) - No such process 00:18:16.369 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 302476 00:18:16.369 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:16.369 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:16.369 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:16.369 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:16.369 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:16.369 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:16.369 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:16.369 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:16.369 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:16.369 rmmod nvme_tcp 00:18:16.369 rmmod nvme_fabrics 00:18:16.369 rmmod nvme_keyring 00:18:16.629 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:16.629 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:16.629 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:16.629 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 302126 ']' 00:18:16.629 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 302126 00:18:16.629 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 302126 ']' 00:18:16.629 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 302126 00:18:16.629 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:18:16.629 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:16.629 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 302126 00:18:16.629 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 
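The loop traced above is target/connect_stress.sh polling its stress workload: 'kill -0 302476' sends no signal and only reports whether the PID still exists, an rpc_cmd is issued on every pass, and the loop ends once the probe fails with "No such process", after which the script waits on the PID and removes its rpc.txt scratch file. A minimal stand-alone sketch of that liveness-polling pattern (illustrative only, not the SPDK script; the sleep job below is a placeholder for the real stress process):

  #!/usr/bin/env bash
  sleep 5 &     # stand-in for the stress process (PID 302476 in this log)
  pid=$!
  # 'kill -0' probes the PID without delivering a signal; its exit status
  # flips to non-zero once the process is gone -- the exact
  # "kill: (PID) - No such process" condition recorded above.
  while kill -0 "$pid" 2>/dev/null; do
      : # connect_stress.sh fires an RPC here on each iteration
      sleep 1
  done
  wait "$pid"   # reap the job, mirroring 'wait 302476' at connect_stress.sh@38
  echo "worker exited with status $?"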
00:18:16.629 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:16.629 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 302126' 00:18:16.629 killing process with pid 302126 00:18:16.629 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 302126 00:18:16.630 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 302126 00:18:16.630 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:16.630 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:16.630 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:16.630 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:18:16.630 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:18:16.630 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:16.630 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:18:16.630 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:16.630 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:16.630 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.630 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.630 17:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:19.175 00:18:19.175 real 0m21.412s 00:18:19.175 user 0m43.933s 00:18:19.175 sys 0m7.936s 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:19.175 ************************************ 00:18:19.175 END TEST nvmf_connect_stress 00:18:19.175 ************************************ 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:19.175 ************************************ 00:18:19.175 START TEST nvmf_fused_ordering 00:18:19.175 ************************************ 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:19.175 * Looking for test storage... 
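The iptr step in the teardown above is how nvmftestfini removes only the firewall rules the test itself installed: each rule is added with an identifying comment, so cleanup can round-trip the whole ruleset through a text filter and drop just the tagged lines. A hedged sketch of that tag-then-strip idiom, using the interface and port seen elsewhere in this trace (requires root; cvl_0_1 and 4420 are this run's values, not universal defaults):

  # Setup (nvmf/common.sh@788 in the trace): open the NVMe/TCP port on the
  # initiator-side interface and tag the rule with an SPDK_NVMF comment.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # Teardown (nvmf/common.sh@789): dump the ruleset, drop every line carrying
  # the tag, and load the result back; untagged rules survive unchanged.
  iptables-save | grep -v SPDK_NVMF | iptables-restore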
00:18:19.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:19.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.175 --rc genhtml_branch_coverage=1 00:18:19.175 --rc genhtml_function_coverage=1 00:18:19.175 --rc genhtml_legend=1 00:18:19.175 --rc geninfo_all_blocks=1 00:18:19.175 --rc geninfo_unexecuted_blocks=1 00:18:19.175 00:18:19.175 ' 00:18:19.175 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:19.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.175 --rc genhtml_branch_coverage=1 00:18:19.175 --rc genhtml_function_coverage=1 00:18:19.175 --rc genhtml_legend=1 00:18:19.175 --rc geninfo_all_blocks=1 00:18:19.175 --rc geninfo_unexecuted_blocks=1 00:18:19.175 00:18:19.175 ' 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:19.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.176 --rc genhtml_branch_coverage=1 00:18:19.176 --rc genhtml_function_coverage=1 00:18:19.176 --rc genhtml_legend=1 00:18:19.176 --rc geninfo_all_blocks=1 00:18:19.176 --rc geninfo_unexecuted_blocks=1 00:18:19.176 00:18:19.176 ' 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:19.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.176 --rc genhtml_branch_coverage=1 00:18:19.176 --rc genhtml_function_coverage=1 00:18:19.176 --rc genhtml_legend=1 00:18:19.176 --rc geninfo_all_blocks=1 00:18:19.176 --rc geninfo_unexecuted_blocks=1 00:18:19.176 00:18:19.176 ' 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:19.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:19.176 17:34:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:27.324 17:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:27.324 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:27.324 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:27.324 Found net devices under 0000:31:00.0: cvl_0_0 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:27.324 Found net devices under 0000:31:00.1: cvl_0_1 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:27.324 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:27.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:27.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:18:27.325 00:18:27.325 --- 10.0.0.2 ping statistics --- 00:18:27.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.325 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:27.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:27.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:18:27.325 00:18:27.325 --- 10.0.0.1 ping statistics --- 00:18:27.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.325 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=308722 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 308722 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 308722 ']' 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:27.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:27.325 17:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:27.325 [2024-10-08 17:34:18.725406] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:18:27.325 [2024-10-08 17:34:18.725474] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.325 [2024-10-08 17:34:18.813412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.325 [2024-10-08 17:34:18.906765] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.325 [2024-10-08 17:34:18.906821] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.325 [2024-10-08 17:34:18.906830] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.325 [2024-10-08 17:34:18.906837] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.325 [2024-10-08 17:34:18.906843] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.325 [2024-10-08 17:34:18.907601] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.587 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:27.587 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:18:27.587 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:27.587 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:27.587 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:27.848 [2024-10-08 17:34:19.587963] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:27.848 [2024-10-08 17:34:19.612205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:27.848 NULL1 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.848 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:27.848 [2024-10-08 17:34:19.682440] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
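The rpc_cmd calls traced above assemble the fused-ordering target step by step: a TCP transport, subsystem cnode1, a listener on 10.0.0.2:4420, a null bdev, and the namespace mapping the initiator reports as "size: 1GB" a few lines below. As a hedged recap rather than a copy of the test script (the scripts/rpc.py path assumes the SPDK repo root, and the default /var/tmp/spdk.sock control socket is assumed), the same target could be rebuilt by hand against a running nvmf_tgt:

  # Transport, with the flags exactly as traced (-o -u 8192).
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # Subsystem cnode1: allow any host (-a), a serial number, at most 10 namespaces.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  # Null backing bdev (no real media): 1000 MB with 512-byte blocks.
  scripts/rpc.py bdev_null_create NULL1 1000 512
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1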
00:18:27.848 [2024-10-08 17:34:19.682483] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid308924 ]
00:18:28.421 Attached to nqn.2016-06.io.spdk:cnode1
00:18:28.421 Namespace ID: 1 size: 1GB
00:18:28.421 fused_ordering(0)
[fused_ordering(1) through fused_ordering(1022) condensed: 1024 identical per-iteration progress lines in total, timestamps advancing from 00:18:28.421 to 00:18:30.405]
00:18:30.405 fused_ordering(1023)
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:18:30.405 rmmod nvme_tcp
00:18:30.405 rmmod nvme_fabrics
00:18:30.405 rmmod nvme_keyring
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
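The fused_ordering app above exercises NVMe fused-command ordering over TCP: the NVMe spec requires the two halves of a fused pair (compare-and-write) to reach the controller back to back, and the run logs one fused_ordering(i) line per iteration across 1024 iterations against nqn.2016-06.io.spdk:cnode1. A hedged sketch of the target-side RPC plumbing such a run assumes; the bdev name and malloc sizing here are illustrative assumptions, while the NQN, serial, transport options, and listener address match values recorded elsewhere in this log, not the verbatim fused_ordering.sh:

    # Illustrative target setup for the fused_ordering run (assumed, not verbatim):
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o                     # NVMF_TRANSPORT_OPTS='-t tcp -o' per this log
    $rpc_py bdev_malloc_create -b Malloc0 1024 512              # 1 GB namespace, 512 B blocks (sizing assumed)
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420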
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 308722 ']'
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 308722
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 308722 ']'
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 308722
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 308722
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 308722'
killing process with pid 308722
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 308722
00:18:30.405 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 308722
00:18:30.666 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:18:30.666 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:18:30.666 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:18:30.666 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:18:30.666 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save
00:18:30.666 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:18:30.666 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore
00:18:30.666 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:18:30.666 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:18:30.666 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:30.666 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:30.666 17:34:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:18:33.215
00:18:33.215 real 0m13.919s
00:18:33.215 user 0m7.682s
00:18:33.215 sys 0m7.239s
00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:18:33.215 ************************************
00:18:33.215 END TEST nvmf_fused_ordering
************************************ 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:33.215 ************************************ 00:18:33.215 START TEST nvmf_ns_masking 00:18:33.215 ************************************ 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:33.215 * Looking for test storage... 00:18:33.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:33.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.215 --rc genhtml_branch_coverage=1 00:18:33.215 --rc genhtml_function_coverage=1 00:18:33.215 --rc genhtml_legend=1 00:18:33.215 --rc geninfo_all_blocks=1 00:18:33.215 --rc geninfo_unexecuted_blocks=1 00:18:33.215 00:18:33.215 ' 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:33.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.215 --rc genhtml_branch_coverage=1 00:18:33.215 --rc genhtml_function_coverage=1 00:18:33.215 --rc genhtml_legend=1 00:18:33.215 --rc geninfo_all_blocks=1 00:18:33.215 --rc geninfo_unexecuted_blocks=1 00:18:33.215 00:18:33.215 ' 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:33.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.215 --rc genhtml_branch_coverage=1 00:18:33.215 --rc genhtml_function_coverage=1 00:18:33.215 --rc genhtml_legend=1 00:18:33.215 --rc geninfo_all_blocks=1 00:18:33.215 --rc geninfo_unexecuted_blocks=1 00:18:33.215 00:18:33.215 ' 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:33.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.215 --rc genhtml_branch_coverage=1 00:18:33.215 --rc genhtml_function_coverage=1 00:18:33.215 --rc genhtml_legend=1 00:18:33.215 --rc geninfo_all_blocks=1 00:18:33.215 --rc geninfo_unexecuted_blocks=1 00:18:33.215 00:18:33.215 ' 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:33.215 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:33.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=f676d414-bba3-422b-8566-3e5aa98c588d 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=6d225ef8-c012-4e0b-a32f-3bf101481c3d 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=ba6a1b76-a81e-4cc6-874f-12200e84d36b 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:33.216 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.216 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:33.216 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:33.216 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:33.216 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:41.363 17:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:41.363 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:41.363 17:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:41.363 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:41.363 Found net devices under 0000:31:00.0: cvl_0_0 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
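The trace above is common.sh discovering which kernel interfaces back the two E810 ports it matched by PCI ID (0x8086:0x159b, functions 0000:31:00.0 and 0000:31:00.1 in this log): for each function it globs the sysfs net/ directory and keeps the interface name. The same idiom, reduced to a standalone sketch with the device addresses taken from this log:

    # Resolve PCI functions to their net interfaces via sysfs, as the
    # pci_net_devs glob in the trace does.
    for pci in 0000:31:00.0 0000:31:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue       # skip functions with no bound netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")       # strip the sysfs path, keep e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done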
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:18:41.363 Found net devices under 0000:31:00.1: cvl_0_1
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:18:41.363 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
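nvmf_tcp_init carves the two ports into a back-to-back test topology: cvl_0_0 is moved into the fresh cvl_0_0_ns_spdk namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. The same sequence, condensed from the trace above (same commands, same order):

    # Point-to-point netns topology, as built by nvmf_tcp_init in this log.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port joins the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up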
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:41.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:41.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:18:41.364 00:18:41.364 --- 10.0.0.2 ping statistics --- 00:18:41.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.364 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:41.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:41.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:18:41.364 00:18:41.364 --- 10.0.0.1 ping statistics --- 00:18:41.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.364 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=313687 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 313687 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 313687 ']' 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:41.364 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:41.364 [2024-10-08 17:34:32.646369] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:18:41.364 [2024-10-08 17:34:32.646430] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.364 [2024-10-08 17:34:32.735770] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.364 [2024-10-08 17:34:32.831376] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.364 [2024-10-08 17:34:32.831440] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.364 [2024-10-08 17:34:32.831449] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.364 [2024-10-08 17:34:32.831457] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.364 [2024-10-08 17:34:32.831463] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
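Before the target app came up, nvmf_tcp_init stitched the two E810 ports into a loopback topology: one port is moved into its own network namespace to act as the target side, so initiator traffic really crosses the wire between the two ports. Condensed from the trace above, with this run's interface names and addresses (they are not fixed elsewhere):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target sanity check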
00:18:41.364 [2024-10-08 17:34:32.832268] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.625 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:41.625 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:41.625 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:41.625 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:41.625 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:41.625 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.625 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:41.887 [2024-10-08 17:34:33.683181] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.887 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:41.887 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:41.887 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:42.147 Malloc1 00:18:42.148 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:42.148 Malloc2 00:18:42.408 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:42.408 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:42.670 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:42.931 [2024-10-08 17:34:34.692121] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.931 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:42.931 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ba6a1b76-a81e-4cc6-874f-12200e84d36b -a 10.0.0.2 -s 4420 -i 4 00:18:42.931 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:42.931 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:42.931 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:42.931 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:42.931 
17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:45.479 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:45.479 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:45.479 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:45.479 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:45.479 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:45.479 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:45.479 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:45.479 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:45.479 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:45.479 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:45.479 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:45.479 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:45.479 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:45.479 [ 0]:0x1 00:18:45.479 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:45.479 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:45.479 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2dfa24f2edfa460f8d1d885423c4b45a 00:18:45.479 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2dfa24f2edfa460f8d1d885423c4b45a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:45.479 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:45.479 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:45.479 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:45.479 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:45.479 [ 0]:0x1 00:18:45.479 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:45.479 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:45.479 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2dfa24f2edfa460f8d1d885423c4b45a 00:18:45.479 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2dfa24f2edfa460f8d1d885423c4b45a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:45.479 17:34:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:45.479 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:45.479 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:45.479 [ 1]:0x2 00:18:45.479 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:45.479 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:45.479 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7ab052a220e049098a306bd8669383f3 00:18:45.479 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7ab052a220e049098a306bd8669383f3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:45.479 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:45.479 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:45.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:45.479 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:45.740 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:46.002 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:46.002 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ba6a1b76-a81e-4cc6-874f-12200e84d36b -a 10.0.0.2 -s 4420 -i 4 00:18:46.002 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:46.002 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:46.002 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:46.002 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:18:46.002 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:18:46.002 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:48.544 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:48.544 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:48.544 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:48.544 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:48.544 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:48.544 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:18:48.544 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:48.544 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:48.544 [ 0]:0x2 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=7ab052a220e049098a306bd8669383f3 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7ab052a220e049098a306bd8669383f3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:48.544 [ 0]:0x1 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2dfa24f2edfa460f8d1d885423c4b45a 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2dfa24f2edfa460f8d1d885423c4b45a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:48.544 [ 1]:0x2 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7ab052a220e049098a306bd8669383f3 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7ab052a220e049098a306bd8669383f3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:48.544 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.805 17:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:48.805 [ 0]:0x2 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7ab052a220e049098a306bd8669383f3 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7ab052a220e049098a306bd8669383f3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:48.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:48.805 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:49.065 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:49.065 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ba6a1b76-a81e-4cc6-874f-12200e84d36b -a 10.0.0.2 -s 4420 -i 4 00:18:49.326 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:49.326 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:49.326 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:49.326 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:49.326 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:49.326 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:51.239 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:51.239 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:51.239 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:51.239 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:51.239 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:51.239 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:51.239 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:51.239 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:51.500 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:51.500 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:51.500 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:51.500 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:51.500 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:51.500 [ 0]:0x1 00:18:51.501 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:51.501 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:51.501 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2dfa24f2edfa460f8d1d885423c4b45a 00:18:51.501 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2dfa24f2edfa460f8d1d885423c4b45a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:51.501 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:51.501 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:51.501 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:51.501 [ 1]:0x2 00:18:51.501 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:51.501 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:51.501 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7ab052a220e049098a306bd8669383f3 00:18:51.501 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7ab052a220e049098a306bd8669383f3 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:51.501 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:51.762 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:51.762 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:51.762 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:51.762 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:51.762 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:51.762 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:51.763 [ 0]:0x2 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7ab052a220e049098a306bd8669383f3 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7ab052a220e049098a306bd8669383f3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:51.763 17:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:51.763 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:52.024 [2024-10-08 17:34:43.809033] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:52.024 request: 00:18:52.024 { 00:18:52.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.024 "nsid": 2, 00:18:52.024 "host": "nqn.2016-06.io.spdk:host1", 00:18:52.024 "method": "nvmf_ns_remove_host", 00:18:52.024 "req_id": 1 00:18:52.024 } 00:18:52.024 Got JSON-RPC error response 00:18:52.024 response: 00:18:52.024 { 00:18:52.024 "code": -32602, 00:18:52.024 "message": "Invalid parameters" 00:18:52.024 } 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:52.024 17:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:52.024 [ 0]:0x2 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7ab052a220e049098a306bd8669383f3 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7ab052a220e049098a306bd8669383f3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:52.024 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:52.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:52.024 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=316158 00:18:52.024 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.024 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:52.024 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 316158 /var/tmp/host.sock 00:18:52.024 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 316158 ']' 00:18:52.024 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:52.024 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:52.024 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:52.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:52.285 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:52.285 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:52.285 [2024-10-08 17:34:44.071011] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:18:52.285 [2024-10-08 17:34:44.071066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316158 ] 00:18:52.285 [2024-10-08 17:34:44.149079] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.285 [2024-10-08 17:34:44.212763] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.227 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:53.227 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:53.227 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:53.227 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:53.487 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid f676d414-bba3-422b-8566-3e5aa98c588d 00:18:53.487 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:18:53.487 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F676D414BBA3422B85663E5AA98C588D -i 00:18:53.487 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 6d225ef8-c012-4e0b-a32f-3bf101481c3d 00:18:53.487 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:18:53.487 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 6D225EF8C0124E0BA32F3BF101481C3D -i 00:18:53.747 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:54.008 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:54.008 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:54.008 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:54.270 nvme0n1 00:18:54.271 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:54.271 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:54.532 nvme1n2 00:18:54.532 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:54.532 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:54.532 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:54.532 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:54.532 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:54.792 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:54.792 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:54.792 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:54.792 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:55.053 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ f676d414-bba3-422b-8566-3e5aa98c588d == \f\6\7\6\d\4\1\4\-\b\b\a\3\-\4\2\2\b\-\8\5\6\6\-\3\e\5\a\a\9\8\c\5\8\8\d ]] 00:18:55.053 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:55.053 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:55.053 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:55.053 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
6d225ef8-c012-4e0b-a32f-3bf101481c3d == \6\d\2\2\5\e\f\8\-\c\0\1\2\-\4\e\0\b\-\a\3\2\f\-\3\b\f\1\0\1\4\8\1\c\3\d ]] 00:18:55.054 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 316158 00:18:55.054 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 316158 ']' 00:18:55.054 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 316158 00:18:55.315 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:55.315 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:55.315 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 316158 00:18:55.315 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:55.315 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:55.315 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 316158' 00:18:55.315 killing process with pid 316158 00:18:55.315 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 316158 00:18:55.315 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 316158 00:18:55.575 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:55.575 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:18:55.575 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:18:55.575 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:55.575 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:55.575 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:55.575 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:55.575 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:55.575 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:55.575 rmmod nvme_tcp 00:18:55.575 rmmod nvme_fabrics 00:18:55.575 rmmod nvme_keyring 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 313687 ']' 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 313687 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 313687 ']' 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 313687 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@955 -- # uname 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 313687 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 313687' 00:18:55.835 killing process with pid 313687 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 313687 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 313687 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:55.835 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:55.836 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.836 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.836 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.386 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:58.386 00:18:58.386 real 0m25.143s 00:18:58.386 user 0m25.424s 00:18:58.386 sys 0m7.993s 00:18:58.386 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:58.386 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:58.386 ************************************ 00:18:58.386 END TEST nvmf_ns_masking 00:18:58.386 ************************************ 00:18:58.386 17:34:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:58.386 17:34:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:58.386 17:34:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:58.386 17:34:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:58.386 17:34:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
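The masking flow the finished test exercised boils down to a short RPC sequence plus an initiator-side visibility probe. A namespace added with --no-auto-visible stays hidden until a host NQN is explicitly allowed, and a masked NSID reads back an all-zero nguid. A minimal sketch, assuming rpc.py is on PATH and the subsystem, listener, and connected controller from the trace already exist:

rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
# From the initiator, check what the controller now exposes:
nvme list-ns /dev/nvme0                              # NSID 1 listed only while allowed
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid  # all zeroes when masked
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1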
00:18:58.386 ************************************ 00:18:58.386 START TEST nvmf_nvme_cli 00:18:58.386 ************************************ 00:18:58.386 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:58.386 * Looking for test storage... 00:18:58.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:58.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.386 --rc genhtml_branch_coverage=1 00:18:58.386 --rc genhtml_function_coverage=1 00:18:58.386 --rc genhtml_legend=1 00:18:58.386 --rc geninfo_all_blocks=1 00:18:58.386 --rc geninfo_unexecuted_blocks=1 00:18:58.386 00:18:58.386 ' 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:58.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.386 --rc genhtml_branch_coverage=1 00:18:58.386 --rc genhtml_function_coverage=1 00:18:58.386 --rc genhtml_legend=1 00:18:58.386 --rc geninfo_all_blocks=1 00:18:58.386 --rc geninfo_unexecuted_blocks=1 00:18:58.386 00:18:58.386 ' 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:58.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.386 --rc genhtml_branch_coverage=1 00:18:58.386 --rc genhtml_function_coverage=1 00:18:58.386 --rc genhtml_legend=1 00:18:58.386 --rc geninfo_all_blocks=1 00:18:58.386 --rc geninfo_unexecuted_blocks=1 00:18:58.386 00:18:58.386 ' 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:58.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.386 --rc genhtml_branch_coverage=1 00:18:58.386 --rc genhtml_function_coverage=1 00:18:58.386 --rc genhtml_legend=1 00:18:58.386 --rc geninfo_all_blocks=1 00:18:58.386 --rc geninfo_unexecuted_blocks=1 00:18:58.386 00:18:58.386 ' 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
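The long scripts/common.sh trace above is just a version gate: `lt 1.15 2` splits the installed lcov version and the threshold on ".", "-", and ":" and compares them component by component to decide whether this lcov needs explicit branch/function coverage flags. A rough self-contained sketch of that comparison, assuming purely numeric components (the real cmp_versions additionally validates each component through its decimal helper):

    # Return 0 (true) when version $1 is strictly older than $2.
    lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # 1 < 2 settles "1.15 < 2"
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    lt "$(lcov --version | awk '{print $NF}')" 2 &&
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'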
00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:58.386 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:58.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:58.387 17:34:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:58.387 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:06.553 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:06.553 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.553 
17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:06.553 Found net devices under 0000:31:00.0: cvl_0_0 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:06.553 Found net devices under 0000:31:00.1: cvl_0_1 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:06.553 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:06.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:19:06.554 00:19:06.554 --- 10.0.0.2 ping statistics --- 00:19:06.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.554 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:06.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:06.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:19:06.554 00:19:06.554 --- 10.0.0.1 ping statistics --- 00:19:06.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.554 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=321251 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 321251 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 321251 ']' 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:06.554 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:06.554 [2024-10-08 17:34:57.915387] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
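Both pings succeeding matters here: nvmf_tcp_init, traced just above, rewires the two e810 ports so test traffic actually crosses the physical link. cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2/24 and becomes the target side, cvl_0_1 stays in the root namespace as 10.0.0.1/24 for the initiator, and port 4420 is opened with a tagged rule so the iptr teardown can find it later. Condensed from the trace, with device names and addresses exactly as logged (root required):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port, tagging the rule for later cleanup by iptr.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns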
00:19:06.554 [2024-10-08 17:34:57.915455] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.554 [2024-10-08 17:34:58.005101] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:06.554 [2024-10-08 17:34:58.101403] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.554 [2024-10-08 17:34:58.101463] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.554 [2024-10-08 17:34:58.101476] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.554 [2024-10-08 17:34:58.101483] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.554 [2024-10-08 17:34:58.101489] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:06.554 [2024-10-08 17:34:58.103572] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.554 [2024-10-08 17:34:58.103736] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.554 [2024-10-08 17:34:58.103893] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:19:06.554 [2024-10-08 17:34:58.103893] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.815 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:06.815 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:19:06.816 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:06.816 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:06.816 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:06.816 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.816 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:06.816 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.816 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:06.816 [2024-10-08 17:34:58.796143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.816 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.816 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:06.816 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.816 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:07.078 Malloc0 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
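The nvmf_create_transport call above, together with the bdev, subsystem, namespace, and listener calls that complete just below, is the entire target provisioning for this test. rpc_cmd is a thin wrapper around scripts/rpc.py, so the same sequence as direct rpc.py invocations looks roughly like this sketch (arguments exactly as traced; flag semantics left to rpc.py's own help):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192     # transport first
    $rpc bdev_malloc_create 64 512 -b Malloc0        # two RAM-backed bdevs
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME \
        -d SPDK_Controller1 -i 291
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Even though the target itself runs inside cvl_0_0_ns_spdk, rpc.py works from the root namespace: the RPC endpoint is a Unix domain socket on the shared filesystem, which network namespaces do not isolate.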
00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:07.078 Malloc1 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:07.078 [2024-10-08 17:34:58.897411] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.078 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:19:07.341 00:19:07.341 Discovery Log Number of Records 2, Generation counter 2 00:19:07.341 =====Discovery Log Entry 0====== 00:19:07.341 trtype: tcp 00:19:07.341 adrfam: ipv4 00:19:07.341 subtype: current discovery subsystem 00:19:07.341 treq: not required 00:19:07.341 portid: 0 00:19:07.341 trsvcid: 4420 00:19:07.341 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:19:07.341 traddr: 10.0.0.2 00:19:07.341 eflags: explicit discovery connections, duplicate discovery information 00:19:07.341 sectype: none 00:19:07.341 =====Discovery Log Entry 1====== 00:19:07.341 trtype: tcp 00:19:07.341 adrfam: ipv4 00:19:07.341 subtype: nvme subsystem 00:19:07.341 treq: not required 00:19:07.341 portid: 0 00:19:07.341 trsvcid: 4420 00:19:07.341 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:07.341 traddr: 10.0.0.2 00:19:07.341 eflags: none 00:19:07.341 sectype: none 00:19:07.341 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:07.342 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:07.342 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:19:07.342 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:07.342 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:19:07.342 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:19:07.342 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:07.342 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:19:07.342 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:07.342 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:07.342 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:08.732 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:08.732 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:19:08.732 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:08.732 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:19:08.732 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:19:08.732 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:19:10.651 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:10.651 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:10.651 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:10.920 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:19:10.920 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:10.920 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:19:10.920 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:10.920 17:35:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:19:10.920 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:10.920 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:19:10.920 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:19:10.920 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:10.920 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:19:10.920 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:10.920 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:10.920 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:19:10.920 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:10.920 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:10.920 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:19:10.920 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:10.920 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:10.920 /dev/nvme0n2 ]] 00:19:10.920 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:10.920 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:10.920 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:19:10.920 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:10.920 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:19:11.182 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:19:11.182 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:11.182 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:19:11.182 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:11.182 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:11.182 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:19:11.182 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:11.182 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:11.182 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:19:11.182 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:11.182 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:11.182 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:11.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:11.443 17:35:03 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:11.443 rmmod nvme_tcp 00:19:11.443 rmmod nvme_fabrics 00:19:11.443 rmmod nvme_keyring 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 321251 ']' 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 321251 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 321251 ']' 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 321251 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 321251 
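Stripped of the tracing, the host-side exercise above is short: discover the two log entries, connect with the host NQN generated earlier by nvme gen-hostnqn, wait until both namespaces surface with the subsystem serial, then disconnect by NQN. A condensed sketch with values as logged (the harness caps the wait at 15 tries of 2 s, mirrored here):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
    nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1
    for i in {1..15}; do   # waitforserial: both namespaces must show the serial
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 2 )) && break
        sleep 2
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1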
00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 321251' 00:19:11.443 killing process with pid 321251 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 321251 00:19:11.443 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 321251 00:19:11.705 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:11.705 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:11.705 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:11.705 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:19:11.705 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:19:11.705 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:11.705 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:19:11.705 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:11.705 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:11.705 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.705 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:11.705 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.656 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:13.919 00:19:13.919 real 0m15.702s 00:19:13.919 user 0m24.056s 00:19:13.919 sys 0m6.508s 00:19:13.919 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:13.919 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:13.919 ************************************ 00:19:13.919 END TEST nvmf_nvme_cli 00:19:13.919 ************************************ 00:19:13.919 17:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:19:13.919 17:35:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:19:13.919 17:35:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:13.919 17:35:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:13.919 17:35:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:13.919 ************************************ 00:19:13.919 START TEST nvmf_vfio_user 00:19:13.919 ************************************ 00:19:13.919 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:19:13.919 * Looking for test storage... 00:19:13.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:13.919 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:13.919 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:19:13.919 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:13.919 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:13.919 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:13.919 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.919 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.919 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.919 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.919 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:14.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.181 --rc genhtml_branch_coverage=1 00:19:14.181 --rc genhtml_function_coverage=1 00:19:14.181 --rc genhtml_legend=1 00:19:14.181 --rc geninfo_all_blocks=1 00:19:14.181 --rc geninfo_unexecuted_blocks=1 00:19:14.181 00:19:14.181 ' 00:19:14.181 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:14.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.181 --rc genhtml_branch_coverage=1 00:19:14.181 --rc genhtml_function_coverage=1 00:19:14.181 --rc genhtml_legend=1 00:19:14.181 --rc geninfo_all_blocks=1 00:19:14.181 --rc geninfo_unexecuted_blocks=1 00:19:14.182 00:19:14.182 ' 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:14.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.182 --rc genhtml_branch_coverage=1 00:19:14.182 --rc genhtml_function_coverage=1 00:19:14.182 --rc genhtml_legend=1 00:19:14.182 --rc geninfo_all_blocks=1 00:19:14.182 --rc geninfo_unexecuted_blocks=1 00:19:14.182 00:19:14.182 ' 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:14.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.182 --rc genhtml_branch_coverage=1 00:19:14.182 --rc genhtml_function_coverage=1 00:19:14.182 --rc genhtml_legend=1 00:19:14.182 --rc geninfo_all_blocks=1 00:19:14.182 --rc geninfo_unexecuted_blocks=1 00:19:14.182 00:19:14.182 ' 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:14.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=322804 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 322804' 00:19:14.182 Process pid: 322804 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 322804 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 322804 ']' 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:14.182 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:14.182 [2024-10-08 17:35:06.033451] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:19:14.182 [2024-10-08 17:35:06.033534] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.182 [2024-10-08 17:35:06.116923] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:14.444 [2024-10-08 17:35:06.178492] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.444 [2024-10-08 17:35:06.178531] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:14.444 [2024-10-08 17:35:06.178537] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.444 [2024-10-08 17:35:06.178542] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.444 [2024-10-08 17:35:06.178546] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:14.444 [2024-10-08 17:35:06.180063] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.444 [2024-10-08 17:35:06.180424] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.444 [2024-10-08 17:35:06.180556] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.444 [2024-10-08 17:35:06.180556] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:19:15.014 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:15.014 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:19:15.014 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:15.955 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:19:16.218 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:16.218 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:16.218 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:16.218 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:16.218 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:16.218 Malloc1 00:19:16.479 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:16.479 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:16.739 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:16.999 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:16.999 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:16.999 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:16.999 Malloc2 00:19:16.999 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
00:19:17.259 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:17.518 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:17.518 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:19:17.518 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:19:17.518 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:17.518 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:17.518 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:19:17.518 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:17.782 [2024-10-08 17:35:09.526065] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:19:17.782 [2024-10-08 17:35:09.526110] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid323546 ] 00:19:17.782 [2024-10-08 17:35:09.554100] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:19:17.782 [2024-10-08 17:35:09.564909] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:17.782 [2024-10-08 17:35:09.564927] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3736719000 00:19:17.782 [2024-10-08 17:35:09.565907] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:17.782 [2024-10-08 17:35:09.566909] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:17.782 [2024-10-08 17:35:09.567912] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:17.782 [2024-10-08 17:35:09.568922] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:17.782 [2024-10-08 17:35:09.569922] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:17.782 [2024-10-08 17:35:09.570929] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:17.782 [2024-10-08 17:35:09.571936] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:19:17.782 [2024-10-08 17:35:09.572943] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:17.782 [2024-10-08 17:35:09.573954] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:17.782 [2024-10-08 17:35:09.573961] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f373670e000 00:19:17.782 [2024-10-08 17:35:09.574874] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:17.782 [2024-10-08 17:35:09.587325] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:19:17.782 [2024-10-08 17:35:09.587345] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:19:17.782 [2024-10-08 17:35:09.590056] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:17.782 [2024-10-08 17:35:09.590089] nvme_pcie_common.c: 149:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:17.782 [2024-10-08 17:35:09.590155] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:19:17.782 [2024-10-08 17:35:09.590174] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:19:17.782 [2024-10-08 17:35:09.590179] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:19:17.782 [2024-10-08 17:35:09.591054] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:19:17.782 [2024-10-08 17:35:09.591062] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:19:17.782 [2024-10-08 17:35:09.591067] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:19:17.782 [2024-10-08 17:35:09.592056] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:17.782 [2024-10-08 17:35:09.592063] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:19:17.782 [2024-10-08 17:35:09.592068] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:19:17.782 [2024-10-08 17:35:09.593059] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:19:17.782 [2024-10-08 17:35:09.593067] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:17.782 [2024-10-08 17:35:09.594068] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:19:17.782 [2024-10-08 
17:35:09.594075] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:19:17.782 [2024-10-08 17:35:09.594078] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:19:17.782 [2024-10-08 17:35:09.594083] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:17.782 [2024-10-08 17:35:09.594187] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:19:17.782 [2024-10-08 17:35:09.594191] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:17.782 [2024-10-08 17:35:09.594195] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:19:17.782 [2024-10-08 17:35:09.595071] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:19:17.782 [2024-10-08 17:35:09.598979] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:19:17.782 [2024-10-08 17:35:09.599096] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:17.782 [2024-10-08 17:35:09.600103] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:17.782 [2024-10-08 17:35:09.600170] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:17.782 [2024-10-08 17:35:09.601111] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:19:17.782 [2024-10-08 17:35:09.601117] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:17.782 [2024-10-08 17:35:09.601121] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:19:17.782 [2024-10-08 17:35:09.601137] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:19:17.782 [2024-10-08 17:35:09.601144] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:19:17.782 [2024-10-08 17:35:09.601157] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:17.782 [2024-10-08 17:35:09.601161] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:17.782 [2024-10-08 17:35:09.601163] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:17.782 [2024-10-08 17:35:09.601174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:17.782 [2024-10-08 17:35:09.601212] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:17.782 [2024-10-08 17:35:09.601220] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:19:17.782 [2024-10-08 17:35:09.601224] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:19:17.782 [2024-10-08 17:35:09.601227] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:19:17.782 [2024-10-08 17:35:09.601230] nvme_ctrlr.c:2115:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:17.782 [2024-10-08 17:35:09.601233] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:19:17.782 [2024-10-08 17:35:09.601237] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:19:17.782 [2024-10-08 17:35:09.601240] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:19:17.782 [2024-10-08 17:35:09.601246] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:19:17.782 [2024-10-08 17:35:09.601255] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:17.782 [2024-10-08 17:35:09.601272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:17.782 [2024-10-08 17:35:09.601281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.782 [2024-10-08 17:35:09.601287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.782 [2024-10-08 17:35:09.601293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.782 [2024-10-08 17:35:09.601300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.782 [2024-10-08 17:35:09.601303] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:17.782 [2024-10-08 17:35:09.601309] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:17.782 [2024-10-08 17:35:09.601316] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:17.782 [2024-10-08 17:35:09.601323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:17.782 [2024-10-08 17:35:09.601327] nvme_ctrlr.c:3065:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:19:17.782 [2024-10-08 17:35:09.601332] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:17.782 [2024-10-08 17:35:09.601340] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:19:17.782 [2024-10-08 17:35:09.601345] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:17.782 [2024-10-08 17:35:09.601351] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:17.782 [2024-10-08 17:35:09.601364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:17.782 [2024-10-08 17:35:09.601406] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:19:17.782 [2024-10-08 17:35:09.601411] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:17.782 [2024-10-08 17:35:09.601417] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:17.782 [2024-10-08 17:35:09.601420] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:17.783 [2024-10-08 17:35:09.601422] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:17.783 [2024-10-08 17:35:09.601427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:17.783 [2024-10-08 17:35:09.601438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:17.783 [2024-10-08 17:35:09.601449] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:19:17.783 [2024-10-08 17:35:09.601455] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:19:17.783 [2024-10-08 17:35:09.601461] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:19:17.783 [2024-10-08 17:35:09.601466] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:17.783 [2024-10-08 17:35:09.601469] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:17.783 [2024-10-08 17:35:09.601471] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:17.783 [2024-10-08 17:35:09.601476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:17.783 [2024-10-08 17:35:09.601496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:17.783 [2024-10-08 17:35:09.601505] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:17.783 [2024-10-08 17:35:09.601511] 
nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:17.783 [2024-10-08 17:35:09.601516] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:17.783 [2024-10-08 17:35:09.601519] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:17.783 [2024-10-08 17:35:09.601521] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:17.783 [2024-10-08 17:35:09.601526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:17.783 [2024-10-08 17:35:09.601533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:17.783 [2024-10-08 17:35:09.601541] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:17.783 [2024-10-08 17:35:09.601546] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:19:17.783 [2024-10-08 17:35:09.601552] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:19:17.783 [2024-10-08 17:35:09.601556] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:19:17.783 [2024-10-08 17:35:09.601559] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:17.783 [2024-10-08 17:35:09.601563] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:19:17.783 [2024-10-08 17:35:09.601567] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:19:17.783 [2024-10-08 17:35:09.601570] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:19:17.783 [2024-10-08 17:35:09.601573] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:19:17.783 [2024-10-08 17:35:09.601589] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:17.783 [2024-10-08 17:35:09.601599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:17.783 [2024-10-08 17:35:09.601607] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:17.783 [2024-10-08 17:35:09.601617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:17.783 [2024-10-08 17:35:09.601625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:17.783 [2024-10-08 17:35:09.601634] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:17.783 [2024-10-08 17:35:09.601642] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:17.783 [2024-10-08 17:35:09.601653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:17.783 [2024-10-08 17:35:09.601663] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:17.783 [2024-10-08 17:35:09.601666] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:17.783 [2024-10-08 17:35:09.601669] nvme_pcie_common.c:1265:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:17.783 [2024-10-08 17:35:09.601672] nvme_pcie_common.c:1281:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:17.783 [2024-10-08 17:35:09.601674] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:17.783 [2024-10-08 17:35:09.601678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:17.783 [2024-10-08 17:35:09.601684] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:17.783 [2024-10-08 17:35:09.601687] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:17.783 [2024-10-08 17:35:09.601690] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:17.783 [2024-10-08 17:35:09.601695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:17.783 [2024-10-08 17:35:09.601700] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:17.783 [2024-10-08 17:35:09.601703] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:17.783 [2024-10-08 17:35:09.601705] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:17.783 [2024-10-08 17:35:09.601710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:17.783 [2024-10-08 17:35:09.601715] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:17.783 [2024-10-08 17:35:09.601718] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:17.783 [2024-10-08 17:35:09.601721] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:17.783 [2024-10-08 17:35:09.601725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:17.783 [2024-10-08 17:35:09.601730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:17.783 [2024-10-08 17:35:09.601739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:17.783 [2024-10-08 17:35:09.601747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:17.783 [2024-10-08 17:35:09.601752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:17.783 ===================================================== 00:19:17.783 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:17.783 ===================================================== 00:19:17.783 Controller Capabilities/Features 00:19:17.783 ================================ 00:19:17.783 Vendor ID: 4e58 00:19:17.783 Subsystem Vendor ID: 4e58 00:19:17.783 Serial Number: SPDK1 00:19:17.783 Model Number: SPDK bdev Controller 00:19:17.783 Firmware Version: 25.01 00:19:17.783 Recommended Arb Burst: 6 00:19:17.783 IEEE OUI Identifier: 8d 6b 50 00:19:17.783 Multi-path I/O 00:19:17.783 May have multiple subsystem ports: Yes 00:19:17.783 May have multiple controllers: Yes 00:19:17.783 Associated with SR-IOV VF: No 00:19:17.783 Max Data Transfer Size: 131072 00:19:17.783 Max Number of Namespaces: 32 00:19:17.783 Max Number of I/O Queues: 127 00:19:17.783 NVMe Specification Version (VS): 1.3 00:19:17.783 NVMe Specification Version (Identify): 1.3 00:19:17.783 Maximum Queue Entries: 256 00:19:17.783 Contiguous Queues Required: Yes 00:19:17.783 Arbitration Mechanisms Supported 00:19:17.783 Weighted Round Robin: Not Supported 00:19:17.783 Vendor Specific: Not Supported 00:19:17.783 Reset Timeout: 15000 ms 00:19:17.783 Doorbell Stride: 4 bytes 00:19:17.783 NVM Subsystem Reset: Not Supported 00:19:17.783 Command Sets Supported 00:19:17.783 NVM Command Set: Supported 00:19:17.783 Boot Partition: Not Supported 00:19:17.783 Memory Page Size Minimum: 4096 bytes 00:19:17.783 Memory Page Size Maximum: 4096 bytes 00:19:17.783 Persistent Memory Region: Not Supported 00:19:17.783 Optional Asynchronous Events Supported 00:19:17.783 Namespace Attribute Notices: Supported 00:19:17.783 Firmware Activation Notices: Not Supported 00:19:17.783 ANA Change Notices: Not Supported 00:19:17.783 PLE Aggregate Log Change Notices: Not Supported 00:19:17.783 LBA Status Info Alert Notices: Not Supported 00:19:17.783 EGE Aggregate Log Change Notices: Not Supported 00:19:17.783 Normal NVM Subsystem Shutdown event: Not Supported 00:19:17.783 Zone Descriptor Change Notices: Not Supported 00:19:17.783 Discovery Log Change Notices: Not Supported 00:19:17.783 Controller Attributes 00:19:17.783 128-bit Host Identifier: Supported 00:19:17.783 Non-Operational Permissive Mode: Not Supported 00:19:17.783 NVM Sets: Not Supported 00:19:17.783 Read Recovery Levels: Not Supported 00:19:17.783 Endurance Groups: Not Supported 00:19:17.783 Predictable Latency Mode: Not Supported 00:19:17.783 Traffic Based Keep ALive: Not Supported 00:19:17.783 Namespace Granularity: Not Supported 00:19:17.783 SQ Associations: Not Supported 00:19:17.783 UUID List: Not Supported 00:19:17.783 Multi-Domain Subsystem: Not Supported 00:19:17.783 Fixed Capacity Management: Not Supported 00:19:17.783 Variable Capacity Management: Not Supported 00:19:17.783 Delete Endurance Group: Not Supported 00:19:17.783 Delete NVM Set: Not Supported 00:19:17.783 Extended LBA Formats Supported: Not Supported 00:19:17.783 Flexible Data Placement Supported: Not Supported 00:19:17.783 00:19:17.784 Controller Memory Buffer Support 00:19:17.784 ================================ 00:19:17.784 Supported: No 00:19:17.784 00:19:17.784 Persistent Memory Region Support 00:19:17.784 
================================ 00:19:17.784 Supported: No 00:19:17.784 00:19:17.784 Admin Command Set Attributes 00:19:17.784 ============================ 00:19:17.784 Security Send/Receive: Not Supported 00:19:17.784 Format NVM: Not Supported 00:19:17.784 Firmware Activate/Download: Not Supported 00:19:17.784 Namespace Management: Not Supported 00:19:17.784 Device Self-Test: Not Supported 00:19:17.784 Directives: Not Supported 00:19:17.784 NVMe-MI: Not Supported 00:19:17.784 Virtualization Management: Not Supported 00:19:17.784 Doorbell Buffer Config: Not Supported 00:19:17.784 Get LBA Status Capability: Not Supported 00:19:17.784 Command & Feature Lockdown Capability: Not Supported 00:19:17.784 Abort Command Limit: 4 00:19:17.784 Async Event Request Limit: 4 00:19:17.784 Number of Firmware Slots: N/A 00:19:17.784 Firmware Slot 1 Read-Only: N/A 00:19:17.784 Firmware Activation Without Reset: N/A 00:19:17.784 Multiple Update Detection Support: N/A 00:19:17.784 Firmware Update Granularity: No Information Provided 00:19:17.784 Per-Namespace SMART Log: No 00:19:17.784 Asymmetric Namespace Access Log Page: Not Supported 00:19:17.784 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:19:17.784 Command Effects Log Page: Supported 00:19:17.784 Get Log Page Extended Data: Supported 00:19:17.784 Telemetry Log Pages: Not Supported 00:19:17.784 Persistent Event Log Pages: Not Supported 00:19:17.784 Supported Log Pages Log Page: May Support 00:19:17.784 Commands Supported & Effects Log Page: Not Supported 00:19:17.784 Feature Identifiers & Effects Log Page:May Support 00:19:17.784 NVMe-MI Commands & Effects Log Page: May Support 00:19:17.784 Data Area 4 for Telemetry Log: Not Supported 00:19:17.784 Error Log Page Entries Supported: 128 00:19:17.784 Keep Alive: Supported 00:19:17.784 Keep Alive Granularity: 10000 ms 00:19:17.784 00:19:17.784 NVM Command Set Attributes 00:19:17.784 ========================== 00:19:17.784 Submission Queue Entry Size 00:19:17.784 Max: 64 00:19:17.784 Min: 64 00:19:17.784 Completion Queue Entry Size 00:19:17.784 Max: 16 00:19:17.784 Min: 16 00:19:17.784 Number of Namespaces: 32 00:19:17.784 Compare Command: Supported 00:19:17.784 Write Uncorrectable Command: Not Supported 00:19:17.784 Dataset Management Command: Supported 00:19:17.784 Write Zeroes Command: Supported 00:19:17.784 Set Features Save Field: Not Supported 00:19:17.784 Reservations: Not Supported 00:19:17.784 Timestamp: Not Supported 00:19:17.784 Copy: Supported 00:19:17.784 Volatile Write Cache: Present 00:19:17.784 Atomic Write Unit (Normal): 1 00:19:17.784 Atomic Write Unit (PFail): 1 00:19:17.784 Atomic Compare & Write Unit: 1 00:19:17.784 Fused Compare & Write: Supported 00:19:17.784 Scatter-Gather List 00:19:17.784 SGL Command Set: Supported (Dword aligned) 00:19:17.784 SGL Keyed: Not Supported 00:19:17.784 SGL Bit Bucket Descriptor: Not Supported 00:19:17.784 SGL Metadata Pointer: Not Supported 00:19:17.784 Oversized SGL: Not Supported 00:19:17.784 SGL Metadata Address: Not Supported 00:19:17.784 SGL Offset: Not Supported 00:19:17.784 Transport SGL Data Block: Not Supported 00:19:17.784 Replay Protected Memory Block: Not Supported 00:19:17.784 00:19:17.784 Firmware Slot Information 00:19:17.784 ========================= 00:19:17.784 Active slot: 1 00:19:17.784 Slot 1 Firmware Revision: 25.01 00:19:17.784 00:19:17.784 00:19:17.784 Commands Supported and Effects 00:19:17.784 ============================== 00:19:17.784 Admin Commands 00:19:17.784 -------------- 00:19:17.784 Get Log Page (02h): Supported 
00:19:17.784 Identify (06h): Supported 00:19:17.784 Abort (08h): Supported 00:19:17.784 Set Features (09h): Supported 00:19:17.784 Get Features (0Ah): Supported 00:19:17.784 Asynchronous Event Request (0Ch): Supported 00:19:17.784 Keep Alive (18h): Supported 00:19:17.784 I/O Commands 00:19:17.784 ------------ 00:19:17.784 Flush (00h): Supported LBA-Change 00:19:17.784 Write (01h): Supported LBA-Change 00:19:17.784 Read (02h): Supported 00:19:17.784 Compare (05h): Supported 00:19:17.784 Write Zeroes (08h): Supported LBA-Change 00:19:17.784 Dataset Management (09h): Supported LBA-Change 00:19:17.784 Copy (19h): Supported LBA-Change 00:19:17.784 00:19:17.784 Error Log 00:19:17.784 ========= 00:19:17.784 00:19:17.784 Arbitration 00:19:17.784 =========== 00:19:17.784 Arbitration Burst: 1 00:19:17.784 00:19:17.784 Power Management 00:19:17.784 ================ 00:19:17.784 Number of Power States: 1 00:19:17.784 Current Power State: Power State #0 00:19:17.784 Power State #0: 00:19:17.784 Max Power: 0.00 W 00:19:17.784 Non-Operational State: Operational 00:19:17.784 Entry Latency: Not Reported 00:19:17.784 Exit Latency: Not Reported 00:19:17.784 Relative Read Throughput: 0 00:19:17.784 Relative Read Latency: 0 00:19:17.784 Relative Write Throughput: 0 00:19:17.784 Relative Write Latency: 0 00:19:17.784 Idle Power: Not Reported 00:19:17.784 Active Power: Not Reported 00:19:17.784 Non-Operational Permissive Mode: Not Supported 00:19:17.784 00:19:17.784 Health Information 00:19:17.784 ================== 00:19:17.784 Critical Warnings: 00:19:17.784 Available Spare Space: OK 00:19:17.784 Temperature: OK 00:19:17.784 Device Reliability: OK 00:19:17.784 Read Only: No 00:19:17.784 Volatile Memory Backup: OK 00:19:17.784 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:17.784 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:17.784 Available Spare: 0% 00:19:17.784 Available Spare Threshold: 0% 00:19:17.784 Life Percentage Used: 0% 00:19:17.784 Data Units Read: 0 00:19:17.784 Data Units Written: 0 00:19:17.784 Host Read Commands: 0 00:19:17.784 Host Write Commands: 0 00:19:17.784 Controller Busy Time: 0 minutes 00:19:17.784 Power Cycles: 0 00:19:17.784 Power On Hours: 0 hours 00:19:17.784 Unsafe Shutdowns: 0 00:19:17.784 Unrecoverable Media Errors: 0 00:19:17.784 Lifetime Error Log Entries: 0 00:19:17.784 Warning Temperature Time: 0 minutes 00:19:17.784 Critical Temperature Time: 0 minutes 00:19:17.784 00:19:17.784 Number of Queues 00:19:17.784 ================ 00:19:17.784 Number of I/O Submission Queues: 127 00:19:17.784 Number of I/O Completion Queues: 127 00:19:17.784 00:19:17.784 Active Namespaces 00:19:17.784 ================= 00:19:17.784 Namespace ID:1 00:19:17.784 Error Recovery Timeout: Unlimited 00:19:17.784 Command Set Identifier: NVM (00h) 00:19:17.784 Deallocate: Supported 00:19:17.784 Deallocated/Unwritten Error: Not Supported 00:19:17.784 Deallocated Read Value: Unknown 00:19:17.784 Deallocate in Write Zeroes: Not Supported 00:19:17.784 Deallocated Guard Field: 0xFFFF 00:19:17.784 Flush: Supported 00:19:17.784 Reservation: Supported 00:19:17.784 Namespace Sharing Capabilities: Multiple Controllers 00:19:17.784 Size (in LBAs): 131072 (0GiB) 00:19:17.784 Capacity (in LBAs): 131072 (0GiB) 00:19:17.784 Utilization (in LBAs): 131072 (0GiB) 00:19:17.784 NGUID: F6AE95E5A40A4EA99398C382282BC704 00:19:17.785 UUID: f6ae95e5-a40a-4ea9-9398-c382282bc704 00:19:17.785 Thin Provisioning: Not Supported 00:19:17.785 Per-NS Atomic Units: Yes 00:19:17.785 Atomic Boundary Size (Normal): 0 00:19:17.785 Atomic Boundary Size (PFail): 0 00:19:17.785 Atomic Boundary Offset: 0 00:19:17.785 Maximum Single Source Range Length: 65535 00:19:17.785 Maximum Copy Length: 65535 00:19:17.785 Maximum Source Range Count: 1 00:19:17.785 NGUID/EUI64 Never Reused: No 00:19:17.785 Namespace Write Protected: No 00:19:17.785 Number of LBA Formats: 1 00:19:17.785 Current LBA Format: LBA Format #00 00:19:17.785 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:17.785 00:19:17.785
[2024-10-08 17:35:09.601869] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 [2024-10-08 17:35:09.601879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 [2024-10-08 17:35:09.601900] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD [2024-10-08 17:35:09.601907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-10-08 17:35:09.601912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-10-08 17:35:09.601917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-10-08 17:35:09.601921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-10-08 17:35:09.602122] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 [2024-10-08 17:35:09.602130] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 [2024-10-08 17:35:09.603131] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:17.784 [2024-10-08 17:35:09.603170] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us [2024-10-08 17:35:09.603175] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms [2024-10-08 17:35:09.604130] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 [2024-10-08 17:35:09.604139] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds [2024-10-08 17:35:09.604194] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl [2024-10-08 17:35:09.605147] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:18.046 [2024-10-08 17:35:09.785648] vfio_user.c:2836:enable_ctrlr: *NOTICE*:
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:23.335 Initializing NVMe Controllers 00:19:23.335 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:23.335 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:23.335 Initialization complete. Launching workers. 00:19:23.335 ======================================================== 00:19:23.335 Latency(us) 00:19:23.335 Device Information : IOPS MiB/s Average min max 00:19:23.335 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40018.40 156.32 3198.89 849.54 6805.77 00:19:23.335 ======================================================== 00:19:23.335 Total : 40018.40 156.32 3198.89 849.54 6805.77 00:19:23.335 00:19:23.335 [2024-10-08 17:35:14.806815] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:23.335 17:35:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:23.335 [2024-10-08 17:35:14.977594] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:28.628 Initializing NVMe Controllers 00:19:28.628 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:28.628 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:28.628 Initialization complete. Launching workers. 00:19:28.628 ======================================================== 00:19:28.628 Latency(us) 00:19:28.628 Device Information : IOPS MiB/s Average min max 00:19:28.628 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15923.20 62.20 8044.72 5986.84 15965.50 00:19:28.628 ======================================================== 00:19:28.628 Total : 15923.20 62.20 8044.72 5986.84 15965.50 00:19:28.628 00:19:28.628 [2024-10-08 17:35:20.012104] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:28.628 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:28.628 [2024-10-08 17:35:20.202968] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:33.916 [2024-10-08 17:35:25.267160] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:33.916 Initializing NVMe Controllers 00:19:33.916 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:33.916 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:33.916 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:19:33.916 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:19:33.916 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:19:33.916 Initialization complete. Launching workers. 
00:19:33.916 Starting thread on core 2 00:19:33.916 Starting thread on core 3 00:19:33.916 Starting thread on core 1 00:19:33.916 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:19:33.916 [2024-10-08 17:35:25.508357] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:37.217 [2024-10-08 17:35:28.577241] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:37.217 Initializing NVMe Controllers 00:19:37.217 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:37.217 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:37.217 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:19:37.217 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:19:37.217 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:19:37.217 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:19:37.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:37.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:37.217 Initialization complete. Launching workers. 00:19:37.217 Starting thread on core 1 with urgent priority queue 00:19:37.217 Starting thread on core 2 with urgent priority queue 00:19:37.217 Starting thread on core 3 with urgent priority queue 00:19:37.217 Starting thread on core 0 with urgent priority queue 00:19:37.217 SPDK bdev Controller (SPDK1 ) core 0: 3947.67 IO/s 25.33 secs/100000 ios 00:19:37.217 SPDK bdev Controller (SPDK1 ) core 1: 3867.67 IO/s 25.86 secs/100000 ios 00:19:37.217 SPDK bdev Controller (SPDK1 ) core 2: 3977.33 IO/s 25.14 secs/100000 ios 00:19:37.217 SPDK bdev Controller (SPDK1 ) core 3: 3997.67 IO/s 25.01 secs/100000 ios 00:19:37.217 ======================================================== 00:19:37.217 00:19:37.217 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:37.217 [2024-10-08 17:35:28.800370] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:37.217 Initializing NVMe Controllers 00:19:37.217 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:37.217 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:37.217 Namespace ID: 1 size: 0GB 00:19:37.217 Initialization complete. 00:19:37.217 INFO: using host memory buffer for IO 00:19:37.217 Hello world! 
00:19:37.217 [2024-10-08 17:35:28.836587] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:37.217 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:37.217 [2024-10-08 17:35:29.059403] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:38.159 Initializing NVMe Controllers 00:19:38.159 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:38.159 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:38.159 Initialization complete. Launching workers. 00:19:38.159 submit (in ns) avg, min, max = 5751.8, 2853.3, 3998210.8 00:19:38.159 complete (in ns) avg, min, max = 15614.5, 1628.3, 4993945.8 00:19:38.159 00:19:38.159 Submit histogram 00:19:38.159 ================ 00:19:38.159 Range in us Cumulative Count 00:19:38.159 2.853 - 2.867: 0.1326% ( 27) 00:19:38.159 2.867 - 2.880: 1.1783% ( 213) 00:19:38.159 2.880 - 2.893: 3.5791% ( 489) 00:19:38.159 2.893 - 2.907: 6.4857% ( 592) 00:19:38.159 2.907 - 2.920: 10.5018% ( 818) 00:19:38.159 2.920 - 2.933: 15.9760% ( 1115) 00:19:38.159 2.933 - 2.947: 22.6335% ( 1356) 00:19:38.159 2.947 - 2.960: 29.3156% ( 1361) 00:19:38.159 2.960 - 2.973: 36.7243% ( 1509) 00:19:38.159 2.973 - 2.987: 43.3621% ( 1352) 00:19:38.159 2.987 - 3.000: 50.1718% ( 1387) 00:19:38.159 3.000 - 3.013: 58.0420% ( 1603) 00:19:38.159 3.013 - 3.027: 67.5815% ( 1943) 00:19:38.159 3.027 - 3.040: 75.8837% ( 1691) 00:19:38.159 3.040 - 3.053: 82.9340% ( 1436) 00:19:38.159 3.053 - 3.067: 88.7078% ( 1176) 00:19:38.159 3.067 - 3.080: 93.0185% ( 878) 00:19:38.159 3.080 - 3.093: 95.7973% ( 566) 00:19:38.159 3.093 - 3.107: 97.3095% ( 308) 00:19:38.159 3.107 - 3.120: 98.1932% ( 180) 00:19:38.159 3.120 - 3.133: 98.6351% ( 90) 00:19:38.159 3.133 - 3.147: 98.7971% ( 33) 00:19:38.159 3.147 - 3.160: 98.8659% ( 14) 00:19:38.159 3.160 - 3.173: 98.9101% ( 9) 00:19:38.159 3.173 - 3.187: 98.9199% ( 2) 00:19:38.159 3.187 - 3.200: 98.9248% ( 1) 00:19:38.159 3.200 - 3.213: 98.9346% ( 2) 00:19:38.159 3.253 - 3.267: 98.9444% ( 2) 00:19:38.159 3.293 - 3.307: 98.9493% ( 1) 00:19:38.159 3.320 - 3.333: 98.9542% ( 1) 00:19:38.159 3.333 - 3.347: 98.9592% ( 1) 00:19:38.159 3.373 - 3.387: 98.9641% ( 1) 00:19:38.159 3.387 - 3.400: 98.9739% ( 2) 00:19:38.159 3.400 - 3.413: 98.9886% ( 3) 00:19:38.159 3.413 - 3.440: 99.0377% ( 10) 00:19:38.159 3.440 - 3.467: 99.0868% ( 10) 00:19:38.159 3.467 - 3.493: 99.1899% ( 21) 00:19:38.159 3.493 - 3.520: 99.2537% ( 13) 00:19:38.159 3.520 - 3.547: 99.4158% ( 33) 00:19:38.159 3.547 - 3.573: 99.5090% ( 19) 00:19:38.159 3.573 - 3.600: 99.5483% ( 8) 00:19:38.159 3.600 - 3.627: 99.5679% ( 4) 00:19:38.159 3.627 - 3.653: 99.5827% ( 3) 00:19:38.159 3.653 - 3.680: 99.5876% ( 1) 00:19:38.159 3.680 - 3.707: 99.5925% ( 1) 00:19:38.159 3.840 - 3.867: 99.5974% ( 1) 00:19:38.159 3.893 - 3.920: 99.6023% ( 1) 00:19:38.159 4.027 - 4.053: 99.6072% ( 1) 00:19:38.159 4.347 - 4.373: 99.6170% ( 2) 00:19:38.159 4.373 - 4.400: 99.6220% ( 1) 00:19:38.159 4.427 - 4.453: 99.6318% ( 2) 00:19:38.159 4.453 - 4.480: 99.6367% ( 1) 00:19:38.159 4.667 - 4.693: 99.6416% ( 1) 00:19:38.159 4.720 - 4.747: 99.6465% ( 1) 00:19:38.159 4.747 - 4.773: 99.6514% ( 1) 00:19:38.159 4.773 - 4.800: 99.6760% ( 5) 00:19:38.159 4.800 - 4.827: 99.6809% ( 1) 00:19:38.159 4.827 - 
4.853: 99.6858% ( 1) 00:19:38.159 4.853 - 4.880: 99.6956% ( 2) 00:19:38.159 4.933 - 4.960: 99.7005% ( 1) 00:19:38.159 4.960 - 4.987: 99.7054% ( 1) 00:19:38.159 4.987 - 5.013: 99.7152% ( 2) 00:19:38.159 5.013 - 5.040: 99.7201% ( 1) 00:19:38.159 5.067 - 5.093: 99.7251% ( 1) 00:19:38.159 5.093 - 5.120: 99.7398% ( 3) 00:19:38.159 5.120 - 5.147: 99.7496% ( 2) 00:19:38.159 5.147 - 5.173: 99.7594% ( 2) 00:19:38.159 5.173 - 5.200: 99.7643% ( 1) 00:19:38.159 5.200 - 5.227: 99.7742% ( 2) 00:19:38.159 5.227 - 5.253: 99.7840% ( 2) 00:19:38.159 5.253 - 5.280: 99.7889% ( 1) 00:19:38.159 5.413 - 5.440: 99.7987% ( 2) 00:19:38.159 5.440 - 5.467: 99.8085% ( 2) 00:19:38.159 5.467 - 5.493: 99.8183% ( 2) 00:19:38.159 5.493 - 5.520: 99.8233% ( 1) 00:19:38.159 5.573 - 5.600: 99.8331% ( 2) 00:19:38.159 5.600 - 5.627: 99.8429% ( 2) 00:19:38.159 5.627 - 5.653: 99.8478% ( 1) 00:19:38.159 5.707 - 5.733: 99.8527% ( 1) 00:19:38.159 [2024-10-08 17:35:30.088388] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:38.159 5.787 - 5.813: 99.8576% ( 1) 00:19:38.159 5.867 - 5.893: 99.8625% ( 1) 00:19:38.159 5.947 - 5.973: 99.8723% ( 2) 00:19:38.160 6.000 - 6.027: 99.8773% ( 1) 00:19:38.160 6.053 - 6.080: 99.8822% ( 1) 00:19:38.160 6.160 - 6.187: 99.8871% ( 1) 00:19:38.160 6.267 - 6.293: 99.8920% ( 1) 00:19:38.160 6.293 - 6.320: 99.8969% ( 1) 00:19:38.160 6.560 - 6.587: 99.9067% ( 2) 00:19:38.160 6.987 - 7.040: 99.9116% ( 1) 00:19:38.160 7.307 - 7.360: 99.9165% ( 1) 00:19:38.160 8.533 - 8.587: 99.9214% ( 1) 00:19:38.160 11.200 - 11.253: 99.9264% ( 1) 00:19:38.160 13.547 - 13.600: 99.9313% ( 1) 00:19:38.160 3986.773 - 4014.080: 100.0000% ( 14) 00:19:38.160 00:19:38.160 Complete histogram 00:19:38.160 ================== 00:19:38.160 Range in us Cumulative Count 00:19:38.160 1.627 - 1.633: 0.0147% ( 3) 00:19:38.160 1.633 - 1.640: 0.0196% ( 1) 00:19:38.160 1.640 - 1.647: 0.7610% ( 151) 00:19:38.160 1.647 - 1.653: 1.0703% ( 63) 00:19:38.160 1.653 - 1.660: 1.1243% ( 11) 00:19:38.160 1.660 - 1.667: 1.2863% ( 33) 00:19:38.160 1.667 - 1.673: 1.3207% ( 7) 00:19:38.160 1.673 - 1.680: 1.4925% ( 35) 00:19:38.160 1.680 - 1.687: 33.4299% ( 6505) 00:19:38.160 1.687 - 1.693: 43.2247% ( 1995) 00:19:38.160 1.693 - 1.700: 51.3698% ( 1659) 00:19:38.160 1.700 - 1.707: 67.2918% ( 3243) 00:19:38.160 1.707 - 1.720: 80.4939% ( 2689) 00:19:38.160 1.720 - 1.733: 83.2924% ( 570) 00:19:38.160 1.733 - 1.747: 85.5361% ( 457) 00:19:38.160 1.747 - 1.760: 90.2101% ( 952) 00:19:38.160 1.760 - 1.773: 94.8792% ( 951) 00:19:38.160 1.773 - 1.787: 97.5452% ( 543) 00:19:38.160 1.787 - 1.800: 98.6057% ( 216) 00:19:38.160 1.800 - 1.813: 98.8610% ( 52) 00:19:38.160 1.813 - 1.827: 98.8757% ( 3) 00:19:38.160 1.853 - 1.867: 98.8806% ( 1) 00:19:38.160 1.867 - 1.880: 98.8855% ( 1) 00:19:38.160 1.893 - 1.907: 98.8904% ( 1) 00:19:38.160 1.920 - 1.933: 98.9002% ( 2) 00:19:38.160 1.933 - 1.947: 98.9101% ( 2) 00:19:38.160 2.013 - 2.027: 98.9150% ( 1) 00:19:38.160 2.027 - 2.040: 98.9444% ( 6) 00:19:38.160 2.040 - 2.053: 98.9592% ( 3) 00:19:38.160 2.053 - 2.067: 99.0426% ( 17) 00:19:38.160 2.067 - 2.080: 99.1997% ( 32) 00:19:38.160 2.080 - 2.093: 99.4108% ( 43) 00:19:38.160 2.093 - 2.107: 99.4550% ( 9) 00:19:38.160 2.107 - 2.120: 99.4845% ( 6) 00:19:38.160 2.133 - 2.147: 99.5041% ( 4) 00:19:38.160 2.200 - 2.213: 99.5090% ( 1) 00:19:38.160 2.213 - 2.227: 99.5139% ( 1) 00:19:38.160 2.240 - 2.253: 99.5189% ( 1) 00:19:38.160 3.160 - 3.173: 99.5238% ( 1) 00:19:38.160 3.187 - 3.200: 99.5287% ( 1) 00:19:38.160 3.213 - 3.227: 99.5336% 
( 1) 00:19:38.160 3.573 - 3.600: 99.5385% ( 1) 00:19:38.160 3.627 - 3.653: 99.5483% ( 2) 00:19:38.160 3.680 - 3.707: 99.5532% ( 1) 00:19:38.160 3.733 - 3.760: 99.5630% ( 2) 00:19:38.160 3.893 - 3.920: 99.5778% ( 3) 00:19:38.160 3.920 - 3.947: 99.5827% ( 1) 00:19:38.160 4.213 - 4.240: 99.5876% ( 1) 00:19:38.160 4.347 - 4.373: 99.5925% ( 1) 00:19:38.160 4.480 - 4.507: 99.5974% ( 1) 00:19:38.160 4.560 - 4.587: 99.6023% ( 1) 00:19:38.160 4.640 - 4.667: 99.6072% ( 1) 00:19:38.160 4.693 - 4.720: 99.6121% ( 1) 00:19:38.160 4.747 - 4.773: 99.6170% ( 1) 00:19:38.160 4.853 - 4.880: 99.6220% ( 1) 00:19:38.160 5.573 - 5.600: 99.6269% ( 1) 00:19:38.160 6.053 - 6.080: 99.6318% ( 1) 00:19:38.160 7.947 - 8.000: 99.6367% ( 1) 00:19:38.160 8.640 - 8.693: 99.6416% ( 1) 00:19:38.160 30.933 - 31.147: 99.6465% ( 1) 00:19:38.160 52.480 - 52.693: 99.6514% ( 1) 00:19:38.160 3017.387 - 3031.040: 99.6563% ( 1) 00:19:38.160 3495.253 - 3522.560: 99.6612% ( 1) 00:19:38.160 3986.773 - 4014.080: 99.9951% ( 68) 00:19:38.160 4969.813 - 4997.120: 100.0000% ( 1) 00:19:38.160 00:19:38.160 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:19:38.160 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:38.160 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:19:38.160 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:19:38.160 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:38.421 [ 00:19:38.421 { 00:19:38.421 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:38.421 "subtype": "Discovery", 00:19:38.421 "listen_addresses": [], 00:19:38.421 "allow_any_host": true, 00:19:38.421 "hosts": [] 00:19:38.421 }, 00:19:38.421 { 00:19:38.421 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:38.421 "subtype": "NVMe", 00:19:38.421 "listen_addresses": [ 00:19:38.421 { 00:19:38.421 "trtype": "VFIOUSER", 00:19:38.421 "adrfam": "IPv4", 00:19:38.421 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:38.421 "trsvcid": "0" 00:19:38.421 } 00:19:38.421 ], 00:19:38.421 "allow_any_host": true, 00:19:38.421 "hosts": [], 00:19:38.421 "serial_number": "SPDK1", 00:19:38.421 "model_number": "SPDK bdev Controller", 00:19:38.421 "max_namespaces": 32, 00:19:38.421 "min_cntlid": 1, 00:19:38.421 "max_cntlid": 65519, 00:19:38.421 "namespaces": [ 00:19:38.421 { 00:19:38.421 "nsid": 1, 00:19:38.421 "bdev_name": "Malloc1", 00:19:38.421 "name": "Malloc1", 00:19:38.421 "nguid": "F6AE95E5A40A4EA99398C382282BC704", 00:19:38.421 "uuid": "f6ae95e5-a40a-4ea9-9398-c382282bc704" 00:19:38.421 } 00:19:38.421 ] 00:19:38.421 }, 00:19:38.421 { 00:19:38.421 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:38.421 "subtype": "NVMe", 00:19:38.421 "listen_addresses": [ 00:19:38.421 { 00:19:38.421 "trtype": "VFIOUSER", 00:19:38.421 "adrfam": "IPv4", 00:19:38.421 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:38.421 "trsvcid": "0" 00:19:38.421 } 00:19:38.421 ], 00:19:38.421 "allow_any_host": true, 00:19:38.421 "hosts": [], 00:19:38.421 "serial_number": "SPDK2", 00:19:38.421 "model_number": "SPDK bdev Controller", 00:19:38.421 "max_namespaces": 32, 00:19:38.421 "min_cntlid": 1, 00:19:38.421 
"max_cntlid": 65519, 00:19:38.421 "namespaces": [ 00:19:38.421 { 00:19:38.421 "nsid": 1, 00:19:38.421 "bdev_name": "Malloc2", 00:19:38.421 "name": "Malloc2", 00:19:38.421 "nguid": "09226F3B2C91448A8FDB18FB812C9B5C", 00:19:38.421 "uuid": "09226f3b-2c91-448a-8fdb-18fb812c9b5c" 00:19:38.421 } 00:19:38.421 ] 00:19:38.421 } 00:19:38.421 ] 00:19:38.421 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:38.421 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=327700 00:19:38.421 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:38.421 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:19:38.421 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:19:38.422 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:38.422 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:19:38.422 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:19:38.422 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:38.684 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:38.684 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:19:38.684 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:19:38.684 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:38.684 [2024-10-08 17:35:30.459344] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:38.684 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:38.684 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:38.684 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:19:38.684 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:38.684 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:19:38.946 Malloc3 00:19:38.946 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:19:38.946 [2024-10-08 17:35:30.877287] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:38.946 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:38.946 Asynchronous Event Request test 00:19:38.946 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:38.946 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:38.946 Registering asynchronous event callbacks... 00:19:38.946 Starting namespace attribute notice tests for all controllers... 00:19:38.946 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:38.946 aer_cb - Changed Namespace 00:19:38.946 Cleaning up... 00:19:39.207 [ 00:19:39.207 { 00:19:39.207 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:39.207 "subtype": "Discovery", 00:19:39.207 "listen_addresses": [], 00:19:39.207 "allow_any_host": true, 00:19:39.207 "hosts": [] 00:19:39.207 }, 00:19:39.207 { 00:19:39.207 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:39.207 "subtype": "NVMe", 00:19:39.207 "listen_addresses": [ 00:19:39.207 { 00:19:39.207 "trtype": "VFIOUSER", 00:19:39.207 "adrfam": "IPv4", 00:19:39.207 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:39.207 "trsvcid": "0" 00:19:39.207 } 00:19:39.207 ], 00:19:39.207 "allow_any_host": true, 00:19:39.207 "hosts": [], 00:19:39.207 "serial_number": "SPDK1", 00:19:39.207 "model_number": "SPDK bdev Controller", 00:19:39.207 "max_namespaces": 32, 00:19:39.207 "min_cntlid": 1, 00:19:39.207 "max_cntlid": 65519, 00:19:39.207 "namespaces": [ 00:19:39.207 { 00:19:39.207 "nsid": 1, 00:19:39.207 "bdev_name": "Malloc1", 00:19:39.207 "name": "Malloc1", 00:19:39.207 "nguid": "F6AE95E5A40A4EA99398C382282BC704", 00:19:39.207 "uuid": "f6ae95e5-a40a-4ea9-9398-c382282bc704" 00:19:39.207 }, 00:19:39.207 { 00:19:39.207 "nsid": 2, 00:19:39.207 "bdev_name": "Malloc3", 00:19:39.207 "name": "Malloc3", 00:19:39.207 "nguid": "43F93AC69ABC41EE9307A66037C37C13", 00:19:39.207 "uuid": "43f93ac6-9abc-41ee-9307-a66037c37c13" 00:19:39.207 } 00:19:39.207 ] 00:19:39.207 }, 00:19:39.207 { 00:19:39.207 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:39.207 "subtype": "NVMe", 00:19:39.207 "listen_addresses": [ 00:19:39.207 { 00:19:39.207 "trtype": "VFIOUSER", 00:19:39.207 "adrfam": "IPv4", 00:19:39.207 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:39.207 "trsvcid": "0" 00:19:39.207 } 00:19:39.207 ], 00:19:39.207 "allow_any_host": true, 00:19:39.207 "hosts": [], 00:19:39.207 "serial_number": "SPDK2", 00:19:39.207 "model_number": "SPDK bdev Controller", 00:19:39.207 "max_namespaces": 32, 00:19:39.207 "min_cntlid": 1, 00:19:39.207 "max_cntlid": 65519, 00:19:39.207 "namespaces": [ 00:19:39.207 
{ 00:19:39.207 "nsid": 1, 00:19:39.207 "bdev_name": "Malloc2", 00:19:39.207 "name": "Malloc2", 00:19:39.207 "nguid": "09226F3B2C91448A8FDB18FB812C9B5C", 00:19:39.207 "uuid": "09226f3b-2c91-448a-8fdb-18fb812c9b5c" 00:19:39.207 } 00:19:39.207 ] 00:19:39.207 } 00:19:39.207 ] 00:19:39.207 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 327700 00:19:39.207 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:39.207 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:39.207 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:19:39.207 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:39.207 [2024-10-08 17:35:31.100346] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:19:39.207 [2024-10-08 17:35:31.100368] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid327804 ] 00:19:39.207 [2024-10-08 17:35:31.123999] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:19:39.207 [2024-10-08 17:35:31.135803] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:39.207 [2024-10-08 17:35:31.135822] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f38b56cf000 00:19:39.207 [2024-10-08 17:35:31.136806] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:39.207 [2024-10-08 17:35:31.137807] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:39.207 [2024-10-08 17:35:31.138815] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:39.207 [2024-10-08 17:35:31.139823] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:39.208 [2024-10-08 17:35:31.140833] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:39.208 [2024-10-08 17:35:31.141847] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:39.208 [2024-10-08 17:35:31.142843] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:39.208 [2024-10-08 17:35:31.143852] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:39.208 [2024-10-08 17:35:31.144862] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap 
offset 32 00:19:39.208 [2024-10-08 17:35:31.144870] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f38b56c4000 00:19:39.208 [2024-10-08 17:35:31.145783] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:39.208 [2024-10-08 17:35:31.157159] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:19:39.208 [2024-10-08 17:35:31.157176] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:19:39.208 [2024-10-08 17:35:31.162240] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:39.208 [2024-10-08 17:35:31.162274] nvme_pcie_common.c: 149:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:39.208 [2024-10-08 17:35:31.162335] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:19:39.208 [2024-10-08 17:35:31.162347] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:19:39.208 [2024-10-08 17:35:31.162351] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:19:39.208 [2024-10-08 17:35:31.163245] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:19:39.208 [2024-10-08 17:35:31.163252] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:19:39.208 [2024-10-08 17:35:31.163257] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:19:39.208 [2024-10-08 17:35:31.164251] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:39.208 [2024-10-08 17:35:31.164258] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:19:39.208 [2024-10-08 17:35:31.164263] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:19:39.208 [2024-10-08 17:35:31.165255] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:19:39.208 [2024-10-08 17:35:31.165262] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:39.208 [2024-10-08 17:35:31.166264] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:19:39.208 [2024-10-08 17:35:31.166270] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:19:39.208 [2024-10-08 17:35:31.166274] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 
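The register-level trace running through this block (BAR mappings, CC/CSTS reads and writes, the disable/enable handshake of the controller state machine) is only emitted because the identify run was started with per-component debug logging. A sketch of the same invocation, with the -L flags copied from the spdk_nvme_identify command earlier in this block:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # each -L turns on DEBUG output for one log component
    "$SPDK/build/bin/spdk_nvme_identify" -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -L nvme -L nvme_vfio -L vfio_pci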
00:19:39.208 [2024-10-08 17:35:31.166279] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:39.208 [2024-10-08 17:35:31.166383] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:19:39.208 [2024-10-08 17:35:31.166387] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:39.208 [2024-10-08 17:35:31.166390] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:19:39.208 [2024-10-08 17:35:31.167272] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:19:39.208 [2024-10-08 17:35:31.168276] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:19:39.208 [2024-10-08 17:35:31.169281] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:39.208 [2024-10-08 17:35:31.170285] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:39.208 [2024-10-08 17:35:31.170316] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:39.208 [2024-10-08 17:35:31.171294] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:19:39.208 [2024-10-08 17:35:31.171301] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:39.208 [2024-10-08 17:35:31.171304] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:19:39.208 [2024-10-08 17:35:31.171319] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:19:39.208 [2024-10-08 17:35:31.171324] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:19:39.208 [2024-10-08 17:35:31.171335] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:39.208 [2024-10-08 17:35:31.171339] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:39.208 [2024-10-08 17:35:31.171342] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:39.208 [2024-10-08 17:35:31.171351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:39.208 [2024-10-08 17:35:31.177979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:39.208 [2024-10-08 17:35:31.177988] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:19:39.208 [2024-10-08 17:35:31.177992] 
nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:19:39.208 [2024-10-08 17:35:31.177995] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:19:39.208 [2024-10-08 17:35:31.177998] nvme_ctrlr.c:2115:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:39.208 [2024-10-08 17:35:31.178002] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:19:39.208 [2024-10-08 17:35:31.178006] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:19:39.208 [2024-10-08 17:35:31.178009] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:19:39.208 [2024-10-08 17:35:31.178015] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:19:39.208 [2024-10-08 17:35:31.178024] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:39.208 [2024-10-08 17:35:31.185978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:39.208 [2024-10-08 17:35:31.185988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:39.208 [2024-10-08 17:35:31.185994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:39.208 [2024-10-08 17:35:31.186000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:39.208 [2024-10-08 17:35:31.186006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:39.208 [2024-10-08 17:35:31.186010] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:19:39.208 [2024-10-08 17:35:31.186017] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:39.208 [2024-10-08 17:35:31.186025] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:39.208 [2024-10-08 17:35:31.193978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:39.208 [2024-10-08 17:35:31.193984] nvme_ctrlr.c:3065:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:19:39.208 [2024-10-08 17:35:31.193988] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:39.208 [2024-10-08 17:35:31.193994] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:19:39.208 [2024-10-08 
17:35:31.193999] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:19:39.208 [2024-10-08 17:35:31.194005] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:39.471 [2024-10-08 17:35:31.201979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:39.471 [2024-10-08 17:35:31.202026] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:19:39.471 [2024-10-08 17:35:31.202031] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:19:39.471 [2024-10-08 17:35:31.202036] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:39.471 [2024-10-08 17:35:31.202040] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:39.471 [2024-10-08 17:35:31.202042] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:39.471 [2024-10-08 17:35:31.202047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:39.471 [2024-10-08 17:35:31.209978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:39.471 [2024-10-08 17:35:31.209988] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:19:39.471 [2024-10-08 17:35:31.209996] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:19:39.471 [2024-10-08 17:35:31.210002] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:19:39.471 [2024-10-08 17:35:31.210007] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:39.471 [2024-10-08 17:35:31.210010] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:39.471 [2024-10-08 17:35:31.210012] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:39.472 [2024-10-08 17:35:31.210017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:39.472 [2024-10-08 17:35:31.217979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:39.472 [2024-10-08 17:35:31.217987] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:39.472 [2024-10-08 17:35:31.217993] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:39.472 [2024-10-08 17:35:31.218000] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:39.472 [2024-10-08 
17:35:31.218003] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:39.472 [2024-10-08 17:35:31.218006] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:39.472 [2024-10-08 17:35:31.218010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:39.472 [2024-10-08 17:35:31.225978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:39.472 [2024-10-08 17:35:31.225987] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:39.472 [2024-10-08 17:35:31.225992] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:19:39.472 [2024-10-08 17:35:31.225997] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:19:39.472 [2024-10-08 17:35:31.226001] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:19:39.472 [2024-10-08 17:35:31.226005] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:39.472 [2024-10-08 17:35:31.226009] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:19:39.472 [2024-10-08 17:35:31.226012] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:19:39.472 [2024-10-08 17:35:31.226016] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:19:39.472 [2024-10-08 17:35:31.226019] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:19:39.472 [2024-10-08 17:35:31.226031] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:39.472 [2024-10-08 17:35:31.233978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:39.472 [2024-10-08 17:35:31.233988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:39.472 [2024-10-08 17:35:31.241979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:39.472 [2024-10-08 17:35:31.241989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:39.472 [2024-10-08 17:35:31.249981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:39.472 [2024-10-08 17:35:31.249991] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:39.472 [2024-10-08 17:35:31.257978] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:39.472 [2024-10-08 17:35:31.257992] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:39.472 [2024-10-08 17:35:31.257996] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:39.472 [2024-10-08 17:35:31.257998] nvme_pcie_common.c:1265:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:39.472 [2024-10-08 17:35:31.258001] nvme_pcie_common.c:1281:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:39.472 [2024-10-08 17:35:31.258003] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:39.472 [2024-10-08 17:35:31.258009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:39.472 [2024-10-08 17:35:31.258015] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:39.472 [2024-10-08 17:35:31.258018] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:39.472 [2024-10-08 17:35:31.258020] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:39.472 [2024-10-08 17:35:31.258025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:39.472 [2024-10-08 17:35:31.258030] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:39.472 [2024-10-08 17:35:31.258033] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:39.472 [2024-10-08 17:35:31.258036] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:39.472 [2024-10-08 17:35:31.258040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:39.472 [2024-10-08 17:35:31.258045] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:39.472 [2024-10-08 17:35:31.258048] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:39.472 [2024-10-08 17:35:31.258051] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:39.472 [2024-10-08 17:35:31.258055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:39.472 [2024-10-08 17:35:31.265978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:39.472 [2024-10-08 17:35:31.265989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:39.472 [2024-10-08 17:35:31.265996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:39.472 [2024-10-08 17:35:31.266001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:39.472 ===================================================== 00:19:39.472 NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:39.472 ===================================================== 00:19:39.472 Controller Capabilities/Features 00:19:39.472 ================================ 00:19:39.472 Vendor ID: 4e58 00:19:39.472 Subsystem Vendor ID: 4e58 00:19:39.472 Serial Number: SPDK2 00:19:39.472 Model Number: SPDK bdev Controller 00:19:39.472 Firmware Version: 25.01 00:19:39.472 Recommended Arb Burst: 6 00:19:39.472 IEEE OUI Identifier: 8d 6b 50 00:19:39.472 Multi-path I/O 00:19:39.472 May have multiple subsystem ports: Yes 00:19:39.472 May have multiple controllers: Yes 00:19:39.472 Associated with SR-IOV VF: No 00:19:39.472 Max Data Transfer Size: 131072 00:19:39.472 Max Number of Namespaces: 32 00:19:39.472 Max Number of I/O Queues: 127 00:19:39.472 NVMe Specification Version (VS): 1.3 00:19:39.472 NVMe Specification Version (Identify): 1.3 00:19:39.472 Maximum Queue Entries: 256 00:19:39.472 Contiguous Queues Required: Yes 00:19:39.472 Arbitration Mechanisms Supported 00:19:39.472 Weighted Round Robin: Not Supported 00:19:39.472 Vendor Specific: Not Supported 00:19:39.472 Reset Timeout: 15000 ms 00:19:39.472 Doorbell Stride: 4 bytes 00:19:39.472 NVM Subsystem Reset: Not Supported 00:19:39.472 Command Sets Supported 00:19:39.472 NVM Command Set: Supported 00:19:39.472 Boot Partition: Not Supported 00:19:39.472 Memory Page Size Minimum: 4096 bytes 00:19:39.472 Memory Page Size Maximum: 4096 bytes 00:19:39.472 Persistent Memory Region: Not Supported 00:19:39.472 Optional Asynchronous Events Supported 00:19:39.472 Namespace Attribute Notices: Supported 00:19:39.472 Firmware Activation Notices: Not Supported 00:19:39.472 ANA Change Notices: Not Supported 00:19:39.472 PLE Aggregate Log Change Notices: Not Supported 00:19:39.472 LBA Status Info Alert Notices: Not Supported 00:19:39.472 EGE Aggregate Log Change Notices: Not Supported 00:19:39.472 Normal NVM Subsystem Shutdown event: Not Supported 00:19:39.472 Zone Descriptor Change Notices: Not Supported 00:19:39.472 Discovery Log Change Notices: Not Supported 00:19:39.472 Controller Attributes 00:19:39.472 128-bit Host Identifier: Supported 00:19:39.472 Non-Operational Permissive Mode: Not Supported 00:19:39.472 NVM Sets: Not Supported 00:19:39.472 Read Recovery Levels: Not Supported 00:19:39.472 Endurance Groups: Not Supported 00:19:39.472 Predictable Latency Mode: Not Supported 00:19:39.472 Traffic Based Keep ALive: Not Supported 00:19:39.472 Namespace Granularity: Not Supported 00:19:39.472 SQ Associations: Not Supported 00:19:39.472 UUID List: Not Supported 00:19:39.472 Multi-Domain Subsystem: Not Supported 00:19:39.472 Fixed Capacity Management: Not Supported 00:19:39.472 Variable Capacity Management: Not Supported 00:19:39.472 Delete Endurance Group: Not Supported 00:19:39.472 Delete NVM Set: Not Supported 00:19:39.472 Extended LBA Formats Supported: Not Supported 00:19:39.472 Flexible Data Placement Supported: Not Supported 00:19:39.472 00:19:39.472 Controller Memory Buffer Support 00:19:39.472 ================================ 00:19:39.472 Supported: No 00:19:39.472 00:19:39.472 Persistent Memory Region Support 00:19:39.472 ================================ 00:19:39.472 Supported: No 00:19:39.472 00:19:39.472 Admin Command Set Attributes 00:19:39.472 ============================ 00:19:39.472 Security Send/Receive: Not Supported 00:19:39.472 Format NVM: Not Supported 00:19:39.472 Firmware Activate/Download: Not Supported 00:19:39.472 Namespace Management: Not Supported 00:19:39.472 Device 
Self-Test: Not Supported 00:19:39.472 Directives: Not Supported 00:19:39.472 NVMe-MI: Not Supported 00:19:39.472 Virtualization Management: Not Supported 00:19:39.472 Doorbell Buffer Config: Not Supported 00:19:39.472 Get LBA Status Capability: Not Supported 00:19:39.472 Command & Feature Lockdown Capability: Not Supported 00:19:39.472 Abort Command Limit: 4 00:19:39.472 Async Event Request Limit: 4 00:19:39.473 Number of Firmware Slots: N/A 00:19:39.473 Firmware Slot 1 Read-Only: N/A 00:19:39.473 Firmware Activation Without Reset: N/A 00:19:39.473 Multiple Update Detection Support: N/A 00:19:39.473 Firmware Update Granularity: No Information Provided 00:19:39.473 Per-Namespace SMART Log: No 00:19:39.473 Asymmetric Namespace Access Log Page: Not Supported 00:19:39.473 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:19:39.473 Command Effects Log Page: Supported 00:19:39.473 Get Log Page Extended Data: Supported 00:19:39.473 Telemetry Log Pages: Not Supported 00:19:39.473 Persistent Event Log Pages: Not Supported 00:19:39.473 Supported Log Pages Log Page: May Support 00:19:39.473 Commands Supported & Effects Log Page: Not Supported 00:19:39.473 Feature Identifiers & Effects Log Page:May Support 00:19:39.473 NVMe-MI Commands & Effects Log Page: May Support 00:19:39.473 Data Area 4 for Telemetry Log: Not Supported 00:19:39.473 Error Log Page Entries Supported: 128 00:19:39.473 Keep Alive: Supported 00:19:39.473 Keep Alive Granularity: 10000 ms 00:19:39.473 00:19:39.473 NVM Command Set Attributes 00:19:39.473 ========================== 00:19:39.473 Submission Queue Entry Size 00:19:39.473 Max: 64 00:19:39.473 Min: 64 00:19:39.473 Completion Queue Entry Size 00:19:39.473 Max: 16 00:19:39.473 Min: 16 00:19:39.473 Number of Namespaces: 32 00:19:39.473 Compare Command: Supported 00:19:39.473 Write Uncorrectable Command: Not Supported 00:19:39.473 Dataset Management Command: Supported 00:19:39.473 Write Zeroes Command: Supported 00:19:39.473 Set Features Save Field: Not Supported 00:19:39.473 Reservations: Not Supported 00:19:39.473 Timestamp: Not Supported 00:19:39.473 Copy: Supported 00:19:39.473 Volatile Write Cache: Present 00:19:39.473 Atomic Write Unit (Normal): 1 00:19:39.473 Atomic Write Unit (PFail): 1 00:19:39.473 Atomic Compare & Write Unit: 1 00:19:39.473 Fused Compare & Write: Supported 00:19:39.473 Scatter-Gather List 00:19:39.473 SGL Command Set: Supported (Dword aligned) 00:19:39.473 SGL Keyed: Not Supported 00:19:39.473 SGL Bit Bucket Descriptor: Not Supported 00:19:39.473 SGL Metadata Pointer: Not Supported 00:19:39.473 Oversized SGL: Not Supported 00:19:39.473 SGL Metadata Address: Not Supported 00:19:39.473 SGL Offset: Not Supported 00:19:39.473 Transport SGL Data Block: Not Supported 00:19:39.473 Replay Protected Memory Block: Not Supported 00:19:39.473 00:19:39.473 Firmware Slot Information 00:19:39.473 ========================= 00:19:39.473 Active slot: 1 00:19:39.473 Slot 1 Firmware Revision: 25.01 00:19:39.473 00:19:39.473 00:19:39.473 Commands Supported and Effects 00:19:39.473 ============================== 00:19:39.473 Admin Commands 00:19:39.473 -------------- 00:19:39.473 Get Log Page (02h): Supported 00:19:39.473 Identify (06h): Supported 00:19:39.473 Abort (08h): Supported 00:19:39.473 Set Features (09h): Supported 00:19:39.473 Get Features (0Ah): Supported 00:19:39.473 Asynchronous Event Request (0Ch): Supported 00:19:39.473 Keep Alive (18h): Supported 00:19:39.473 I/O Commands 00:19:39.473 ------------ 00:19:39.473 Flush (00h): Supported LBA-Change 00:19:39.473 Write 
(01h): Supported LBA-Change
00:19:39.473 Read (02h): Supported
00:19:39.473 Compare (05h): Supported
00:19:39.473 Write Zeroes (08h): Supported LBA-Change
00:19:39.473 Dataset Management (09h): Supported LBA-Change
00:19:39.473 Copy (19h): Supported LBA-Change
00:19:39.473
00:19:39.473 Error Log
00:19:39.473 =========
00:19:39.473
00:19:39.473 Arbitration
00:19:39.473 ===========
00:19:39.473 Arbitration Burst: 1
00:19:39.473
00:19:39.473 Power Management
00:19:39.473 ================
00:19:39.473 Number of Power States: 1
00:19:39.473 Current Power State: Power State #0
00:19:39.473 Power State #0:
00:19:39.473 Max Power: 0.00 W
00:19:39.473 Non-Operational State: Operational
00:19:39.473 Entry Latency: Not Reported
00:19:39.473 Exit Latency: Not Reported
00:19:39.473 Relative Read Throughput: 0
00:19:39.473 Relative Read Latency: 0
00:19:39.473 Relative Write Throughput: 0
00:19:39.473 Relative Write Latency: 0
00:19:39.473 Idle Power: Not Reported
00:19:39.473 Active Power: Not Reported
00:19:39.473 Non-Operational Permissive Mode: Not Supported
00:19:39.473
00:19:39.473 Health Information
00:19:39.473 ==================
00:19:39.473 Critical Warnings:
00:19:39.473 Available Spare Space: OK
00:19:39.473 Temperature: OK
00:19:39.473 Device Reliability: OK
00:19:39.473 Read Only: No
00:19:39.473 Volatile Memory Backup: OK
00:19:39.473 Current Temperature: 0 Kelvin (-273 Celsius)
00:19:39.473 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:19:39.473 Available Spare: 0%
00:19:39.473 Available Spare Threshold: 0%
[2024-10-08 17:35:31.266070] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:19:39.473 [2024-10-08 17:35:31.273978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:19:39.473 [2024-10-08 17:35:31.274001] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD
00:19:39.473 [2024-10-08 17:35:31.274008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:39.473 [2024-10-08 17:35:31.274013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:39.473 [2024-10-08 17:35:31.274017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:39.473 [2024-10-08 17:35:31.274022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:39.473 [2024-10-08 17:35:31.274058] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:19:39.473 [2024-10-08 17:35:31.274066] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001
00:19:39.473 [2024-10-08 17:35:31.275067] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:19:39.473 [2024-10-08 17:35:31.275103] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us
00:19:39.473 [2024-10-08 17:35:31.275111] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms
00:19:39.473 [2024-10-08 17:35:31.276072] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9
00:19:39.473 [2024-10-08 17:35:31.276080] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds
00:19:39.473 [2024-10-08 17:35:31.276121] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl
00:19:39.473 [2024-10-08 17:35:31.277091] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:19:39.473 Life Percentage Used: 0%
00:19:39.473 Data Units Read: 0
00:19:39.473 Data Units Written: 0
00:19:39.473 Host Read Commands: 0
00:19:39.473 Host Write Commands: 0
00:19:39.473 Controller Busy Time: 0 minutes
00:19:39.473 Power Cycles: 0
00:19:39.473 Power On Hours: 0 hours
00:19:39.473 Unsafe Shutdowns: 0
00:19:39.473 Unrecoverable Media Errors: 0
00:19:39.473 Lifetime Error Log Entries: 0
00:19:39.473 Warning Temperature Time: 0 minutes
00:19:39.473 Critical Temperature Time: 0 minutes
00:19:39.473
00:19:39.473 Number of Queues
00:19:39.473 ================
00:19:39.473 Number of I/O Submission Queues: 127
00:19:39.473 Number of I/O Completion Queues: 127
00:19:39.473
00:19:39.473 Active Namespaces
00:19:39.473 =================
00:19:39.473 Namespace ID:1
00:19:39.473 Error Recovery Timeout: Unlimited
00:19:39.473 Command Set Identifier: NVM (00h)
00:19:39.473 Deallocate: Supported
00:19:39.473 Deallocated/Unwritten Error: Not Supported
00:19:39.473 Deallocated Read Value: Unknown
00:19:39.473 Deallocate in Write Zeroes: Not Supported
00:19:39.473 Deallocated Guard Field: 0xFFFF
00:19:39.473 Flush: Supported
00:19:39.473 Reservation: Supported
00:19:39.473 Namespace Sharing Capabilities: Multiple Controllers
00:19:39.473 Size (in LBAs): 131072 (0GiB)
00:19:39.473 Capacity (in LBAs): 131072 (0GiB)
00:19:39.473 Utilization (in LBAs): 131072 (0GiB)
00:19:39.473 NGUID: 09226F3B2C91448A8FDB18FB812C9B5C
00:19:39.473 UUID: 09226f3b-2c91-448a-8fdb-18fb812c9b5c
00:19:39.473 Thin Provisioning: Not Supported
00:19:39.473 Per-NS Atomic Units: Yes
00:19:39.473 Atomic Boundary Size (Normal): 0
00:19:39.473 Atomic Boundary Size (PFail): 0
00:19:39.473 Atomic Boundary Offset: 0
00:19:39.473 Maximum Single Source Range Length: 65535
00:19:39.473 Maximum Copy Length: 65535
00:19:39.473 Maximum Source Range Count: 1
00:19:39.473 NGUID/EUI64 Never Reused: No
00:19:39.473 Namespace Write Protected: No
00:19:39.473 Number of LBA Formats: 1
00:19:39.473 Current LBA Format: LBA Format #00
00:19:39.473 LBA Format #00: Data Size: 512 Metadata Size: 0
00:19:39.473
00:19:39.473 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:19:39.473 [2024-10-08 17:35:31.424357] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:19:44.767 Initializing NVMe Controllers
00:19:44.767 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:19:44.767 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:19:44.767 Initialization complete. Launching workers.
00:19:44.767 ======================================================== 00:19:44.767 Latency(us) 00:19:44.767 Device Information : IOPS MiB/s Average min max 00:19:44.767 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39985.66 156.19 3200.96 841.99 9783.77 00:19:44.767 ======================================================== 00:19:44.767 Total : 39985.66 156.19 3200.96 841.99 9783.77 00:19:44.767 00:19:44.767 [2024-10-08 17:35:36.532171] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:44.767 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:44.767 [2024-10-08 17:35:36.711710] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:50.058 Initializing NVMe Controllers 00:19:50.058 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:50.058 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:50.058 Initialization complete. Launching workers. 00:19:50.058 ======================================================== 00:19:50.058 Latency(us) 00:19:50.058 Device Information : IOPS MiB/s Average min max 00:19:50.058 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39970.38 156.13 3202.05 841.64 6797.56 00:19:50.058 ======================================================== 00:19:50.058 Total : 39970.38 156.13 3202.05 841.64 6797.56 00:19:50.058 00:19:50.058 [2024-10-08 17:35:41.731438] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:50.058 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:50.058 [2024-10-08 17:35:41.910519] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:55.351 [2024-10-08 17:35:47.049065] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:55.351 Initializing NVMe Controllers 00:19:55.351 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:55.351 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:55.351 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:55.351 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:55.351 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:55.351 Initialization complete. Launching workers. 
00:19:55.351 Starting thread on core 2 00:19:55.351 Starting thread on core 3 00:19:55.351 Starting thread on core 1 00:19:55.351 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:55.351 [2024-10-08 17:35:47.283392] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:58.653 [2024-10-08 17:35:50.351428] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:58.653 Initializing NVMe Controllers 00:19:58.653 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:58.653 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:58.653 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:58.653 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:58.653 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:58.653 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:58.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:58.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:58.653 Initialization complete. Launching workers. 00:19:58.653 Starting thread on core 1 with urgent priority queue 00:19:58.653 Starting thread on core 2 with urgent priority queue 00:19:58.653 Starting thread on core 3 with urgent priority queue 00:19:58.653 Starting thread on core 0 with urgent priority queue 00:19:58.653 SPDK bdev Controller (SPDK2 ) core 0: 16998.33 IO/s 5.88 secs/100000 ios 00:19:58.653 SPDK bdev Controller (SPDK2 ) core 1: 14813.33 IO/s 6.75 secs/100000 ios 00:19:58.653 SPDK bdev Controller (SPDK2 ) core 2: 13412.00 IO/s 7.46 secs/100000 ios 00:19:58.653 SPDK bdev Controller (SPDK2 ) core 3: 9563.67 IO/s 10.46 secs/100000 ios 00:19:58.653 ======================================================== 00:19:58.653 00:19:58.653 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:58.653 [2024-10-08 17:35:50.576117] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:58.653 Initializing NVMe Controllers 00:19:58.653 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:58.653 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:58.653 Namespace ID: 1 size: 0GB 00:19:58.653 Initialization complete. 00:19:58.653 INFO: using host memory buffer for IO 00:19:58.653 Hello world! 
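The arbitration summary above prints one IO/s figure per core (16998.33, 14813.33, 13412.00 and 9563.67, about 54787 IO/s combined). A throwaway sketch for totalling those lines, assuming the summary block was captured to a file, here named arbitration.log purely for illustration:

    # sum the per-core throughput column from the arbitration summary
    awk '{ for (i = 1; i < NF; i++) if ($(i + 1) == "IO/s") total += $i }
         END { printf "%.2f IO/s total\n", total }' arbitration.log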
00:19:58.653 [2024-10-08 17:35:50.588183] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:58.653 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:58.915 [2024-10-08 17:35:50.812681] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:00.303 Initializing NVMe Controllers 00:20:00.303 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:00.303 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:00.303 Initialization complete. Launching workers. 00:20:00.303 submit (in ns) avg, min, max = 5890.1, 2819.2, 4000275.8 00:20:00.303 complete (in ns) avg, min, max = 15913.4, 1639.2, 3998191.7 00:20:00.303 00:20:00.303 Submit histogram 00:20:00.303 ================ 00:20:00.303 Range in us Cumulative Count 00:20:00.303 2.813 - 2.827: 0.2289% ( 47) 00:20:00.303 2.827 - 2.840: 0.9255% ( 143) 00:20:00.303 2.840 - 2.853: 2.5426% ( 332) 00:20:00.303 2.853 - 2.867: 5.3726% ( 581) 00:20:00.303 2.867 - 2.880: 9.4204% ( 831) 00:20:00.303 2.880 - 2.893: 14.9050% ( 1126) 00:20:00.303 2.893 - 2.907: 20.8281% ( 1216) 00:20:00.303 2.907 - 2.920: 26.8290% ( 1232) 00:20:00.303 2.920 - 2.933: 32.7375% ( 1213) 00:20:00.303 2.933 - 2.947: 39.0648% ( 1299) 00:20:00.303 2.947 - 2.960: 45.8256% ( 1388) 00:20:00.303 2.960 - 2.973: 53.0248% ( 1478) 00:20:00.303 2.973 - 2.987: 61.1154% ( 1661) 00:20:00.303 2.987 - 3.000: 71.0180% ( 2033) 00:20:00.303 3.000 - 3.013: 79.9708% ( 1838) 00:20:00.303 3.013 - 3.027: 86.2299% ( 1285) 00:20:00.303 3.027 - 3.040: 91.0960% ( 999) 00:20:00.303 3.040 - 3.053: 94.2669% ( 651) 00:20:00.303 3.053 - 3.067: 96.2348% ( 404) 00:20:00.303 3.067 - 3.080: 97.4623% ( 252) 00:20:00.303 3.080 - 3.093: 98.3926% ( 191) 00:20:00.303 3.093 - 3.107: 98.8115% ( 86) 00:20:00.303 3.107 - 3.120: 98.9430% ( 27) 00:20:00.303 3.120 - 3.133: 98.9674% ( 5) 00:20:00.303 3.133 - 3.147: 98.9820% ( 3) 00:20:00.303 3.147 - 3.160: 98.9917% ( 2) 00:20:00.303 3.227 - 3.240: 98.9966% ( 1) 00:20:00.303 3.240 - 3.253: 99.0015% ( 1) 00:20:00.303 3.333 - 3.347: 99.0161% ( 3) 00:20:00.303 3.347 - 3.360: 99.0404% ( 5) 00:20:00.303 3.360 - 3.373: 99.0453% ( 1) 00:20:00.303 3.373 - 3.387: 99.0697% ( 5) 00:20:00.303 3.387 - 3.400: 99.0843% ( 3) 00:20:00.303 3.400 - 3.413: 99.1135% ( 6) 00:20:00.303 3.413 - 3.440: 99.1719% ( 12) 00:20:00.303 3.440 - 3.467: 99.2207% ( 10) 00:20:00.303 3.467 - 3.493: 99.2791% ( 12) 00:20:00.303 3.493 - 3.520: 99.3668% ( 18) 00:20:00.303 3.520 - 3.547: 99.4447% ( 16) 00:20:00.303 3.547 - 3.573: 99.5324% ( 18) 00:20:00.303 3.573 - 3.600: 99.5762% ( 9) 00:20:00.303 3.600 - 3.627: 99.5860% ( 2) 00:20:00.303 3.627 - 3.653: 99.5908% ( 1) 00:20:00.303 3.707 - 3.733: 99.5957% ( 1) 00:20:00.303 3.787 - 3.813: 99.6055% ( 2) 00:20:00.303 3.947 - 3.973: 99.6103% ( 1) 00:20:00.303 3.973 - 4.000: 99.6152% ( 1) 00:20:00.303 4.427 - 4.453: 99.6201% ( 1) 00:20:00.303 4.480 - 4.507: 99.6298% ( 2) 00:20:00.303 4.507 - 4.533: 99.6396% ( 2) 00:20:00.303 4.533 - 4.560: 99.6493% ( 2) 00:20:00.303 4.560 - 4.587: 99.6542% ( 1) 00:20:00.303 4.587 - 4.613: 99.6590% ( 1) 00:20:00.303 4.613 - 4.640: 99.6688% ( 2) 00:20:00.303 4.640 - 4.667: 99.6736% ( 1) 00:20:00.303 4.693 - 4.720: 99.6785% ( 1) 00:20:00.303 4.720 - 4.747: 99.6883% ( 2) 00:20:00.303 4.747 - 
4.773: 99.6980% ( 2) 00:20:00.303 4.800 - 4.827: 99.7029% ( 1) 00:20:00.303 4.827 - 4.853: 99.7077% ( 1) 00:20:00.303 4.880 - 4.907: 99.7126% ( 1) 00:20:00.303 4.933 - 4.960: 99.7224% ( 2) 00:20:00.303 4.987 - 5.013: 99.7418% ( 4) 00:20:00.303 5.067 - 5.093: 99.7565% ( 3) 00:20:00.303 5.093 - 5.120: 99.7711% ( 3) 00:20:00.303 5.120 - 5.147: 99.7759% ( 1) 00:20:00.303 5.173 - 5.200: 99.7808% ( 1) 00:20:00.303 5.200 - 5.227: 99.7857% ( 1) 00:20:00.303 5.227 - 5.253: 99.7906% ( 1) 00:20:00.303 5.280 - 5.307: 99.8003% ( 2) 00:20:00.303 5.333 - 5.360: 99.8052% ( 1) 00:20:00.303 5.360 - 5.387: 99.8100% ( 1) 00:20:00.303 5.387 - 5.413: 99.8149% ( 1) 00:20:00.303 5.467 - 5.493: 99.8198% ( 1) 00:20:00.303 5.493 - 5.520: 99.8295% ( 2) 00:20:00.303 5.573 - 5.600: 99.8344% ( 1) 00:20:00.303 5.653 - 5.680: 99.8393% ( 1) 00:20:00.303 5.680 - 5.707: 99.8441% ( 1) 00:20:00.303 5.760 - 5.787: 99.8539% ( 2) 00:20:00.303 [2024-10-08 17:35:51.906529] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:00.303 5.813 - 5.840: 99.8587% ( 1) 00:20:00.303 5.920 - 5.947: 99.8636% ( 1) 00:20:00.303 5.947 - 5.973: 99.8685% ( 1) 00:20:00.303 5.973 - 6.000: 99.8734% ( 1) 00:20:00.303 6.027 - 6.053: 99.8782% ( 1) 00:20:00.303 6.053 - 6.080: 99.8831% ( 1) 00:20:00.303 6.133 - 6.160: 99.8928% ( 2) 00:20:00.303 6.213 - 6.240: 99.8977% ( 1) 00:20:00.303 6.267 - 6.293: 99.9075% ( 2) 00:20:00.303 6.373 - 6.400: 99.9123% ( 1) 00:20:00.303 6.400 - 6.427: 99.9172% ( 1) 00:20:00.303 6.827 - 6.880: 99.9221% ( 1) 00:20:00.303 7.787 - 7.840: 99.9269% ( 1) 00:20:00.303 3986.773 - 4014.080: 100.0000% ( 15) 00:20:00.303 00:20:00.303 Complete histogram 00:20:00.303 ================== 00:20:00.303 Range in us Cumulative Count 00:20:00.303 1.633 - 1.640: 0.0097% ( 2) 00:20:00.303 1.640 - 1.647: 0.5066% ( 102) 00:20:00.303 1.647 - 1.653: 0.6819% ( 36) 00:20:00.303 1.653 - 1.660: 0.7404% ( 12) 00:20:00.303 1.660 - 1.667: 0.8232% ( 17) 00:20:00.303 1.667 - 1.673: 0.8475% ( 5) 00:20:00.303 1.673 - 1.680: 1.1642% ( 65) 00:20:00.303 1.680 - 1.687: 39.0453% ( 7777) 00:20:00.303 1.687 - 1.693: 46.9264% ( 1618) 00:20:00.303 1.693 - 1.700: 54.9635% ( 1650) 00:20:00.303 1.700 - 1.707: 69.7175% ( 3029) 00:20:00.303 1.707 - 1.720: 81.2177% ( 2361) 00:20:00.303 1.720 - 1.733: 83.3463% ( 437) 00:20:00.303 1.733 - 1.747: 85.3824% ( 418) 00:20:00.303 1.747 - 1.760: 89.9269% ( 933) 00:20:00.303 1.760 - 1.773: 94.7686% ( 994) 00:20:00.303 1.773 - 1.787: 97.5889% ( 579) 00:20:00.303 1.787 - 1.800: 98.5679% ( 201) 00:20:00.303 1.800 - 1.813: 98.8407% ( 56) 00:20:00.303 1.813 - 1.827: 98.8699% ( 6) 00:20:00.303 1.827 - 1.840: 98.8846% ( 3) 00:20:00.303 1.907 - 1.920: 98.8894% ( 1) 00:20:00.303 1.960 - 1.973: 98.8943% ( 1) 00:20:00.303 1.987 - 2.000: 98.8992% ( 1) 00:20:00.303 2.013 - 2.027: 98.9040% ( 1) 00:20:00.303 2.027 - 2.040: 98.9138% ( 2) 00:20:00.303 2.040 - 2.053: 98.9284% ( 3) 00:20:00.303 2.053 - 2.067: 99.0112% ( 17) 00:20:00.303 2.067 - 2.080: 99.1330% ( 25) 00:20:00.303 2.080 - 2.093: 99.2791% ( 30) 00:20:00.303 2.093 - 2.107: 99.3911% ( 23) 00:20:00.303 2.107 - 2.120: 99.4398% ( 10) 00:20:00.303 2.120 - 2.133: 99.4496% ( 2) 00:20:00.303 2.133 - 2.147: 99.4593% ( 2) 00:20:00.303 2.160 - 2.173: 99.4642% ( 1) 00:20:00.303 3.200 - 3.213: 99.4691% ( 1) 00:20:00.303 3.293 - 3.307: 99.4739% ( 1) 00:20:00.303 3.347 - 3.360: 99.4837% ( 2) 00:20:00.303 3.373 - 3.387: 99.4886% ( 1) 00:20:00.303 3.413 - 3.440: 99.4934% ( 1) 00:20:00.303 3.493 - 3.520: 99.4983% ( 1) 00:20:00.303 3.520 - 3.547: 99.5080% 
( 2) 00:20:00.303 3.573 - 3.600: 99.5129% ( 1) 00:20:00.303 3.600 - 3.627: 99.5178% ( 1) 00:20:00.303 3.787 - 3.813: 99.5324% ( 3) 00:20:00.303 3.813 - 3.840: 99.5373% ( 1) 00:20:00.303 4.000 - 4.027: 99.5421% ( 1) 00:20:00.304 4.107 - 4.133: 99.5470% ( 1) 00:20:00.304 4.133 - 4.160: 99.5519% ( 1) 00:20:00.304 4.187 - 4.213: 99.5567% ( 1) 00:20:00.304 4.320 - 4.347: 99.5616% ( 1) 00:20:00.304 4.347 - 4.373: 99.5665% ( 1) 00:20:00.304 4.400 - 4.427: 99.5762% ( 2) 00:20:00.304 4.480 - 4.507: 99.5908% ( 3) 00:20:00.304 4.533 - 4.560: 99.5957% ( 1) 00:20:00.304 4.613 - 4.640: 99.6055% ( 2) 00:20:00.304 5.040 - 5.067: 99.6103% ( 1) 00:20:00.304 5.200 - 5.227: 99.6152% ( 1) 00:20:00.304 5.413 - 5.440: 99.6201% ( 1) 00:20:00.304 6.373 - 6.400: 99.6249% ( 1) 00:20:00.304 10.987 - 11.040: 99.6298% ( 1) 00:20:00.304 31.360 - 31.573: 99.6347% ( 1) 00:20:00.304 34.133 - 34.347: 99.6396% ( 1) 00:20:00.304 40.107 - 40.320: 99.6444% ( 1) 00:20:00.304 3932.160 - 3959.467: 99.6493% ( 1) 00:20:00.304 3986.773 - 4014.080: 100.0000% ( 72) 00:20:00.304 00:20:00.304 17:35:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:20:00.304 17:35:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:20:00.304 17:35:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:20:00.304 17:35:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:20:00.304 17:35:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:00.304 [ 00:20:00.304 { 00:20:00.304 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:00.304 "subtype": "Discovery", 00:20:00.304 "listen_addresses": [], 00:20:00.304 "allow_any_host": true, 00:20:00.304 "hosts": [] 00:20:00.304 }, 00:20:00.304 { 00:20:00.304 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:00.304 "subtype": "NVMe", 00:20:00.304 "listen_addresses": [ 00:20:00.304 { 00:20:00.304 "trtype": "VFIOUSER", 00:20:00.304 "adrfam": "IPv4", 00:20:00.304 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:00.304 "trsvcid": "0" 00:20:00.304 } 00:20:00.304 ], 00:20:00.304 "allow_any_host": true, 00:20:00.304 "hosts": [], 00:20:00.304 "serial_number": "SPDK1", 00:20:00.304 "model_number": "SPDK bdev Controller", 00:20:00.304 "max_namespaces": 32, 00:20:00.304 "min_cntlid": 1, 00:20:00.304 "max_cntlid": 65519, 00:20:00.304 "namespaces": [ 00:20:00.304 { 00:20:00.304 "nsid": 1, 00:20:00.304 "bdev_name": "Malloc1", 00:20:00.304 "name": "Malloc1", 00:20:00.304 "nguid": "F6AE95E5A40A4EA99398C382282BC704", 00:20:00.304 "uuid": "f6ae95e5-a40a-4ea9-9398-c382282bc704" 00:20:00.304 }, 00:20:00.304 { 00:20:00.304 "nsid": 2, 00:20:00.304 "bdev_name": "Malloc3", 00:20:00.304 "name": "Malloc3", 00:20:00.304 "nguid": "43F93AC69ABC41EE9307A66037C37C13", 00:20:00.304 "uuid": "43f93ac6-9abc-41ee-9307-a66037c37c13" 00:20:00.304 } 00:20:00.304 ] 00:20:00.304 }, 00:20:00.304 { 00:20:00.304 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:00.304 "subtype": "NVMe", 00:20:00.304 "listen_addresses": [ 00:20:00.304 { 00:20:00.304 "trtype": "VFIOUSER", 00:20:00.304 "adrfam": "IPv4", 00:20:00.304 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:00.304 "trsvcid": "0" 00:20:00.304 } 00:20:00.304 ], 
00:20:00.304 "allow_any_host": true, 00:20:00.304 "hosts": [], 00:20:00.304 "serial_number": "SPDK2", 00:20:00.304 "model_number": "SPDK bdev Controller", 00:20:00.304 "max_namespaces": 32, 00:20:00.304 "min_cntlid": 1, 00:20:00.304 "max_cntlid": 65519, 00:20:00.304 "namespaces": [ 00:20:00.304 { 00:20:00.304 "nsid": 1, 00:20:00.304 "bdev_name": "Malloc2", 00:20:00.304 "name": "Malloc2", 00:20:00.304 "nguid": "09226F3B2C91448A8FDB18FB812C9B5C", 00:20:00.304 "uuid": "09226f3b-2c91-448a-8fdb-18fb812c9b5c" 00:20:00.304 } 00:20:00.304 ] 00:20:00.304 } 00:20:00.304 ] 00:20:00.304 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:00.304 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=331839 00:20:00.304 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:20:00.304 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:20:00.304 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:20:00.304 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:00.304 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:00.304 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:20:00.304 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:00.304 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:00.304 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:00.304 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:20:00.304 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:00.304 [2024-10-08 17:35:52.270354] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:00.566 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:00.566 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:00.566 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:20:00.566 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:20:00.566 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:20:00.566 Malloc4 00:20:00.566 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:20:00.827 [2024-10-08 17:35:52.674117] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:00.827 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:00.827 Asynchronous Event Request test 00:20:00.827 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:00.827 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:00.827 Registering asynchronous event callbacks... 00:20:00.827 Starting namespace attribute notice tests for all controllers... 00:20:00.827 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:00.827 aer_cb - Changed Namespace 00:20:00.827 Cleaning up... 00:20:01.088 [ 00:20:01.088 { 00:20:01.088 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:01.088 "subtype": "Discovery", 00:20:01.088 "listen_addresses": [], 00:20:01.088 "allow_any_host": true, 00:20:01.088 "hosts": [] 00:20:01.088 }, 00:20:01.088 { 00:20:01.088 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:01.088 "subtype": "NVMe", 00:20:01.088 "listen_addresses": [ 00:20:01.088 { 00:20:01.088 "trtype": "VFIOUSER", 00:20:01.088 "adrfam": "IPv4", 00:20:01.088 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:01.088 "trsvcid": "0" 00:20:01.088 } 00:20:01.088 ], 00:20:01.088 "allow_any_host": true, 00:20:01.088 "hosts": [], 00:20:01.088 "serial_number": "SPDK1", 00:20:01.088 "model_number": "SPDK bdev Controller", 00:20:01.088 "max_namespaces": 32, 00:20:01.088 "min_cntlid": 1, 00:20:01.088 "max_cntlid": 65519, 00:20:01.088 "namespaces": [ 00:20:01.088 { 00:20:01.088 "nsid": 1, 00:20:01.088 "bdev_name": "Malloc1", 00:20:01.088 "name": "Malloc1", 00:20:01.088 "nguid": "F6AE95E5A40A4EA99398C382282BC704", 00:20:01.088 "uuid": "f6ae95e5-a40a-4ea9-9398-c382282bc704" 00:20:01.088 }, 00:20:01.088 { 00:20:01.088 "nsid": 2, 00:20:01.088 "bdev_name": "Malloc3", 00:20:01.088 "name": "Malloc3", 00:20:01.088 "nguid": "43F93AC69ABC41EE9307A66037C37C13", 00:20:01.088 "uuid": "43f93ac6-9abc-41ee-9307-a66037c37c13" 00:20:01.088 } 00:20:01.088 ] 00:20:01.088 }, 00:20:01.088 { 00:20:01.088 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:01.088 "subtype": "NVMe", 00:20:01.088 "listen_addresses": [ 00:20:01.088 { 00:20:01.088 "trtype": "VFIOUSER", 00:20:01.088 "adrfam": "IPv4", 00:20:01.088 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:01.088 "trsvcid": "0" 00:20:01.088 } 00:20:01.088 ], 00:20:01.088 "allow_any_host": true, 00:20:01.088 "hosts": [], 00:20:01.088 "serial_number": "SPDK2", 00:20:01.088 "model_number": "SPDK bdev Controller", 00:20:01.088 "max_namespaces": 32, 00:20:01.088 "min_cntlid": 1, 00:20:01.088 "max_cntlid": 65519, 00:20:01.088 "namespaces": [ 00:20:01.088 
{ 00:20:01.088 "nsid": 1, 00:20:01.088 "bdev_name": "Malloc2", 00:20:01.088 "name": "Malloc2", 00:20:01.088 "nguid": "09226F3B2C91448A8FDB18FB812C9B5C", 00:20:01.088 "uuid": "09226f3b-2c91-448a-8fdb-18fb812c9b5c" 00:20:01.088 }, 00:20:01.088 { 00:20:01.088 "nsid": 2, 00:20:01.088 "bdev_name": "Malloc4", 00:20:01.088 "name": "Malloc4", 00:20:01.088 "nguid": "46D4D0A680D440B2AC201FFBD79E1CAC", 00:20:01.088 "uuid": "46d4d0a6-80d4-40b2-ac20-1ffbd79e1cac" 00:20:01.088 } 00:20:01.088 ] 00:20:01.088 } 00:20:01.088 ] 00:20:01.088 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 331839 00:20:01.088 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:20:01.088 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 322804 00:20:01.088 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 322804 ']' 00:20:01.088 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 322804 00:20:01.088 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:20:01.088 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:01.088 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 322804 00:20:01.088 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:01.088 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:01.088 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 322804' 00:20:01.088 killing process with pid 322804 00:20:01.088 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 322804 00:20:01.088 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 322804 00:20:01.349 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:01.349 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:01.349 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:20:01.349 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:20:01.349 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:20:01.349 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:20:01.349 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=332171 00:20:01.349 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 332171' 00:20:01.349 Process pid: 332171 00:20:01.349 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:01.349 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # 
waitforlisten 332171 00:20:01.349 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 332171 ']' 00:20:01.349 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.349 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:01.349 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.349 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:01.349 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:01.349 [2024-10-08 17:35:53.122156] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:20:01.349 [2024-10-08 17:35:53.122847] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:20:01.349 [2024-10-08 17:35:53.122879] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.349 [2024-10-08 17:35:53.191950] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:01.349 [2024-10-08 17:35:53.245480] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.349 [2024-10-08 17:35:53.245518] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.349 [2024-10-08 17:35:53.245525] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.349 [2024-10-08 17:35:53.245529] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.349 [2024-10-08 17:35:53.245534] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:01.349 [2024-10-08 17:35:53.246800] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.349 [2024-10-08 17:35:53.246951] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.349 [2024-10-08 17:35:53.247148] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:20:01.349 [2024-10-08 17:35:53.247269] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.350 [2024-10-08 17:35:53.308961] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:20:01.350 [2024-10-08 17:35:53.310032] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:20:01.350 [2024-10-08 17:35:53.310562] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:20:01.350 [2024-10-08 17:35:53.311049] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:20:01.350 [2024-10-08 17:35:53.311090] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
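With the old target gone, the script brings up a fresh nvmf_tgt in interrupt mode and provisions it over JSON-RPC. A sketch of the per-device sequence, assembled from the rpc.py calls traced just below (paths shortened; -M and -I are the extra transport arguments this interrupt-mode pass adds):

  # create the vfio-user transport once, then one subsystem per device
  scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i          # 64 MiB bdev, 512 B blocks
      scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
          -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done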
00:20:01.610 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:01.610 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:20:01.610 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:20:02.553 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:20:02.815 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:20:02.815 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:20:02.815 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:02.815 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:20:02.815 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:02.815 Malloc1 00:20:02.815 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:20:03.076 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:20:03.337 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:20:03.337 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:03.337 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:20:03.337 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:03.598 Malloc2 00:20:03.598 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:20:03.858 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:20:03.858 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:20:04.119 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:20:04.119 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 332171 00:20:04.119 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 332171 ']' 00:20:04.119 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 332171 00:20:04.119 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:20:04.119 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:04.119 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 332171 00:20:04.119 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:04.119 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:04.119 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 332171' 00:20:04.119 killing process with pid 332171 00:20:04.119 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 332171 00:20:04.119 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 332171 00:20:04.382 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:04.382 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:04.382 00:20:04.382 real 0m50.456s 00:20:04.382 user 3m15.544s 00:20:04.382 sys 0m2.568s 00:20:04.382 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:04.382 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:04.382 ************************************ 00:20:04.382 END TEST nvmf_vfio_user 00:20:04.382 ************************************ 00:20:04.382 17:35:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:04.382 17:35:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:04.382 17:35:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:04.382 17:35:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:04.382 ************************************ 00:20:04.382 START TEST nvmf_vfio_user_nvme_compliance 00:20:04.382 ************************************ 00:20:04.382 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:04.382 * Looking for test storage... 
00:20:04.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:20:04.382 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:04.382 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:20:04.382 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:04.644 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:04.644 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:04.644 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:04.644 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:04.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.645 --rc genhtml_branch_coverage=1 00:20:04.645 --rc genhtml_function_coverage=1 00:20:04.645 --rc genhtml_legend=1 00:20:04.645 --rc geninfo_all_blocks=1 00:20:04.645 --rc geninfo_unexecuted_blocks=1 00:20:04.645 00:20:04.645 ' 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:04.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.645 --rc genhtml_branch_coverage=1 00:20:04.645 --rc genhtml_function_coverage=1 00:20:04.645 --rc genhtml_legend=1 00:20:04.645 --rc geninfo_all_blocks=1 00:20:04.645 --rc geninfo_unexecuted_blocks=1 00:20:04.645 00:20:04.645 ' 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:04.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.645 --rc genhtml_branch_coverage=1 00:20:04.645 --rc genhtml_function_coverage=1 00:20:04.645 --rc genhtml_legend=1 00:20:04.645 --rc geninfo_all_blocks=1 00:20:04.645 --rc geninfo_unexecuted_blocks=1 00:20:04.645 00:20:04.645 ' 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:04.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.645 --rc genhtml_branch_coverage=1 00:20:04.645 --rc genhtml_function_coverage=1 00:20:04.645 --rc genhtml_legend=1 00:20:04.645 --rc geninfo_all_blocks=1 00:20:04.645 --rc 
geninfo_unexecuted_blocks=1 00:20:04.645 00:20:04.645 ' 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:04.645 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.646 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.646 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:04.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:04.646 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:04.646 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:04.646 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:04.646 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:04.646 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:04.646 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:20:04.646 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:20:04.646 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:20:04.646 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=332918 00:20:04.646 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 332918' 00:20:04.646 Process pid: 332918 00:20:04.646 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:04.646 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:04.646 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 332918 00:20:04.646 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 332918 ']' 00:20:04.646 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.646 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:04.646 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.646 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:04.646 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:04.646 [2024-10-08 17:35:56.558312] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
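The compliance run launches its target the same way as the earlier passes, only with a three-core mask. A sketch of the launch line from the trace (-i sets the shared-memory instance id, -e the tracepoint group mask, -m the reactor core mask, so 0x7 = cores 0-2):

  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
  # the script then polls /var/tmp/spdk.sock via waitforlisten before issuing RPCs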
00:20:04.646 [2024-10-08 17:35:56.558384] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.907 [2024-10-08 17:35:56.637963] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:04.907 [2024-10-08 17:35:56.699217] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.907 [2024-10-08 17:35:56.699252] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.907 [2024-10-08 17:35:56.699259] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.907 [2024-10-08 17:35:56.699264] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.907 [2024-10-08 17:35:56.699268] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:04.907 [2024-10-08 17:35:56.700222] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.907 [2024-10-08 17:35:56.700427] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.907 [2024-10-08 17:35:56.700428] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.480 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:05.480 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:20:05.480 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:20:06.424 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:06.424 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:20:06.424 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:06.424 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.424 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:06.424 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.424 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:20:06.424 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:06.424 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.424 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:06.424 malloc0 00:20:06.424 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.424 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:20:06.424 17:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.424 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:06.686 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.686 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:06.686 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.686 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:06.686 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.686 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:06.686 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.686 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:06.686 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.686 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:20:06.686 00:20:06.686 00:20:06.686 CUnit - A unit testing framework for C - Version 2.1-3 00:20:06.686 http://cunit.sourceforge.net/ 00:20:06.686 00:20:06.686 00:20:06.686 Suite: nvme_compliance 00:20:06.686 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-08 17:35:58.594383] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:06.686 [2024-10-08 17:35:58.595677] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:20:06.686 [2024-10-08 17:35:58.595688] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:20:06.686 [2024-10-08 17:35:58.595693] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:20:06.686 [2024-10-08 17:35:58.597400] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:06.686 passed 00:20:06.686 Test: admin_identify_ctrlr_verify_fused ...[2024-10-08 17:35:58.675904] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:06.686 [2024-10-08 17:35:58.678924] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:06.947 passed 00:20:06.948 Test: admin_identify_ns ...[2024-10-08 17:35:58.754439] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:06.948 [2024-10-08 17:35:58.813980] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:20:06.948 [2024-10-08 17:35:58.821983] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:20:06.948 [2024-10-08 17:35:58.843060] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:20:06.948 passed 00:20:06.948 Test: admin_get_features_mandatory_features ...[2024-10-08 17:35:58.917270] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:06.948 [2024-10-08 17:35:58.920287] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:07.210 passed 00:20:07.210 Test: admin_get_features_optional_features ...[2024-10-08 17:35:59.000739] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:07.210 [2024-10-08 17:35:59.003765] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:07.210 passed 00:20:07.210 Test: admin_set_features_number_of_queues ...[2024-10-08 17:35:59.077487] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:07.210 [2024-10-08 17:35:59.183074] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:07.479 passed 00:20:07.479 Test: admin_get_log_page_mandatory_logs ...[2024-10-08 17:35:59.256265] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:07.479 [2024-10-08 17:35:59.259287] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:07.479 passed 00:20:07.479 Test: admin_get_log_page_with_lpo ...[2024-10-08 17:35:59.334015] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:07.479 [2024-10-08 17:35:59.402984] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:20:07.479 [2024-10-08 17:35:59.416015] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:07.479 passed 00:20:07.746 Test: fabric_property_get ...[2024-10-08 17:35:59.491353] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:07.746 [2024-10-08 17:35:59.492552] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:20:07.746 [2024-10-08 17:35:59.494378] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:07.746 passed 00:20:07.746 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-08 17:35:59.570838] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:07.746 [2024-10-08 17:35:59.572032] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:20:07.746 [2024-10-08 17:35:59.573854] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:07.746 passed 00:20:07.746 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-08 17:35:59.649579] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:07.746 [2024-10-08 17:35:59.733984] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:08.009 [2024-10-08 17:35:59.749982] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:08.009 [2024-10-08 17:35:59.755054] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:08.009 passed 00:20:08.009 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-08 17:35:59.828286] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:08.009 [2024-10-08 17:35:59.829488] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:20:08.009 [2024-10-08 17:35:59.831306] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:20:08.009 passed 00:20:08.009 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-08 17:35:59.905039] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:08.009 [2024-10-08 17:35:59.982980] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:08.271 [2024-10-08 17:36:00.005984] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:08.271 [2024-10-08 17:36:00.011047] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:08.271 passed 00:20:08.271 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-08 17:36:00.084264] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:08.271 [2024-10-08 17:36:00.085464] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:20:08.271 [2024-10-08 17:36:00.085482] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:20:08.271 [2024-10-08 17:36:00.087289] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:08.271 passed 00:20:08.271 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-08 17:36:00.164017] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:08.271 [2024-10-08 17:36:00.256979] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:20:08.532 [2024-10-08 17:36:00.264980] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:20:08.532 [2024-10-08 17:36:00.272978] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:20:08.532 [2024-10-08 17:36:00.280980] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:20:08.532 [2024-10-08 17:36:00.310042] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:08.532 passed 00:20:08.532 Test: admin_create_io_sq_verify_pc ...[2024-10-08 17:36:00.384213] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:08.532 [2024-10-08 17:36:00.400986] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:20:08.532 [2024-10-08 17:36:00.418384] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:08.532 passed 00:20:08.532 Test: admin_create_io_qp_max_qps ...[2024-10-08 17:36:00.493828] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:09.920 [2024-10-08 17:36:01.596985] nvme_ctrlr.c:5535:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:20:10.181 [2024-10-08 17:36:01.979691] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:10.181 passed 00:20:10.181 Test: admin_create_io_sq_shared_cq ...[2024-10-08 17:36:02.055648] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:10.442 [2024-10-08 17:36:02.187987] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:10.442 [2024-10-08 17:36:02.225027] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:10.442 passed 00:20:10.442 00:20:10.442 Run Summary: Type Total Ran Passed Failed Inactive 00:20:10.442 suites 1 1 n/a 0 0 00:20:10.442 tests 18 18 18 0 0 00:20:10.442 asserts 360 
360 360 0 n/a 00:20:10.442 00:20:10.442 Elapsed time = 1.492 seconds 00:20:10.442 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 332918 00:20:10.442 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 332918 ']' 00:20:10.442 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 332918 00:20:10.442 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:20:10.442 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:10.442 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 332918 00:20:10.442 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:10.442 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:10.442 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 332918' 00:20:10.442 killing process with pid 332918 00:20:10.442 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 332918 00:20:10.442 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 332918 00:20:10.704 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:20:10.704 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:20:10.704 00:20:10.704 real 0m6.203s 00:20:10.704 user 0m17.489s 00:20:10.704 sys 0m0.541s 00:20:10.704 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:10.704 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:10.704 ************************************ 00:20:10.704 END TEST nvmf_vfio_user_nvme_compliance 00:20:10.704 ************************************ 00:20:10.704 17:36:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:10.704 17:36:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:10.704 17:36:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:10.704 17:36:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:10.704 ************************************ 00:20:10.704 START TEST nvmf_vfio_user_fuzz 00:20:10.704 ************************************ 00:20:10.704 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:10.704 * Looking for test storage... 
00:20:10.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:10.704 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:10.704 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:20:10.704 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:10.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.966 --rc genhtml_branch_coverage=1 00:20:10.966 --rc genhtml_function_coverage=1 00:20:10.966 --rc genhtml_legend=1 00:20:10.966 --rc geninfo_all_blocks=1 00:20:10.966 --rc geninfo_unexecuted_blocks=1 00:20:10.966 00:20:10.966 ' 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:10.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.966 --rc genhtml_branch_coverage=1 00:20:10.966 --rc genhtml_function_coverage=1 00:20:10.966 --rc genhtml_legend=1 00:20:10.966 --rc geninfo_all_blocks=1 00:20:10.966 --rc geninfo_unexecuted_blocks=1 00:20:10.966 00:20:10.966 ' 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:10.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.966 --rc genhtml_branch_coverage=1 00:20:10.966 --rc genhtml_function_coverage=1 00:20:10.966 --rc genhtml_legend=1 00:20:10.966 --rc geninfo_all_blocks=1 00:20:10.966 --rc geninfo_unexecuted_blocks=1 00:20:10.966 00:20:10.966 ' 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:10.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.966 --rc genhtml_branch_coverage=1 00:20:10.966 --rc genhtml_function_coverage=1 00:20:10.966 --rc genhtml_legend=1 00:20:10.966 --rc geninfo_all_blocks=1 00:20:10.966 --rc geninfo_unexecuted_blocks=1 00:20:10.966 00:20:10.966 ' 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.966 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:20:10.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=334211 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 334211' 00:20:10.967 Process pid: 334211 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 334211 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 334211 ']' 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
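The "[: : integer expression expected" complaint traced above (nvmf/common.sh line 33) comes from handing an empty string to bash's integer test operator. A minimal reproduction and a guarded alternative, as a sketch — the variable name below is made up for illustration, since the trace does not show which expansion was empty:

VALUE=''
[ "$VALUE" -eq 1 ]                 # bash: [: : integer expression expected
[ "${VALUE:-0}" -eq 1 ] || echo "empty treated as 0, no error"

The error is harmless here (the script keeps going), but the "${VAR:-0}" form avoids the stderr noise.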
00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:10.967 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:11.909 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:11.909 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:20:11.909 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:12.852 malloc0 00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
00:20:12.852 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:20:44.967 Fuzzing completed. Shutting down the fuzz application 00:20:44.967 00:20:44.967 Dumping successful admin opcodes: 00:20:44.967 8, 9, 10, 24, 00:20:44.967 Dumping successful io opcodes: 00:20:44.967 0, 00:20:44.967 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1337702, total successful commands: 5243, random_seed: 3911180224 00:20:44.967 NS: 0x200003a1ef00 admin qp, Total commands completed: 296171, total successful commands: 2390, random_seed: 2782255552 00:20:44.967 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:20:44.967 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.967 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:44.967 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.967 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 334211 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 334211 ']' 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 334211 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 334211 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 334211' 00:20:44.968 killing process with pid 334211 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 334211 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 334211 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:20:44.968 00:20:44.968 real 0m32.826s 00:20:44.968 user 0m38.498s 00:20:44.968 sys 0m23.273s 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:44.968 
************************************ 00:20:44.968 END TEST nvmf_vfio_user_fuzz 00:20:44.968 ************************************ 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:44.968 ************************************ 00:20:44.968 START TEST nvmf_auth_target 00:20:44.968 ************************************ 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:44.968 * Looking for test storage... 00:20:44.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:44.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.968 --rc genhtml_branch_coverage=1 00:20:44.968 --rc genhtml_function_coverage=1 00:20:44.968 --rc genhtml_legend=1 00:20:44.968 --rc geninfo_all_blocks=1 00:20:44.968 --rc geninfo_unexecuted_blocks=1 00:20:44.968 00:20:44.968 ' 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:44.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.968 --rc genhtml_branch_coverage=1 00:20:44.968 --rc genhtml_function_coverage=1 00:20:44.968 --rc genhtml_legend=1 00:20:44.968 --rc geninfo_all_blocks=1 00:20:44.968 --rc geninfo_unexecuted_blocks=1 00:20:44.968 00:20:44.968 ' 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:44.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.968 --rc genhtml_branch_coverage=1 00:20:44.968 --rc genhtml_function_coverage=1 00:20:44.968 --rc genhtml_legend=1 00:20:44.968 --rc geninfo_all_blocks=1 00:20:44.968 --rc geninfo_unexecuted_blocks=1 00:20:44.968 00:20:44.968 ' 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:44.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.968 --rc genhtml_branch_coverage=1 00:20:44.968 --rc genhtml_function_coverage=1 00:20:44.968 --rc genhtml_legend=1 00:20:44.968 --rc geninfo_all_blocks=1 00:20:44.968 --rc geninfo_unexecuted_blocks=1 00:20:44.968 00:20:44.968 ' 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:44.968 17:36:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.968 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:44.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:20:44.969 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:20:51.563 
17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:51.563 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:51.563 17:36:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:51.563 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:51.563 Found net devices under 0000:31:00.0: cvl_0_0 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:51.563 Found net devices under 0000:31:00.1: cvl_0_1 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:51.563 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:51.563 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:51.563 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:51.563 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:51.563 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:51.563 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:51.563 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:51.563 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:51.563 17:36:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:51.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:51.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:20:51.563 00:20:51.563 --- 10.0.0.2 ping statistics --- 00:20:51.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.563 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:20:51.563 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:51.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:51.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:20:51.563 00:20:51.563 --- 10.0.0.1 ping statistics --- 00:20:51.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.563 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:20:51.563 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:51.563 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:20:51.563 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:51.564 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:51.564 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:51.564 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:51.564 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:51.564 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:51.564 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:51.564 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:20:51.564 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:51.564 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:51.564 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.564 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=344918 00:20:51.564 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 344918 00:20:51.564 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:51.564 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 344918 ']' 00:20:51.564 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.564 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:51.564 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
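The block above is nvmf_tcp_init doing the per-run network plumbing: one of the two detected e810 ports is moved into a private network namespace so target and initiator can exercise real NICs on one machine, an iptables ACCEPT rule opens the NVMe/TCP port, and a ping in each direction proves the path before any NVMe traffic flows. A condensed sketch of the same sequence, using the interface names and addresses from this log:

  ip netns add cvl_0_0_ns_spdk                        # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP in the default netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP listen port
  ping -c 1 10.0.0.2                                  # default netns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target netns -> initiator

With that in place, nvmfappstart launches nvmf_tgt inside the namespace (the NVMF_TARGET_NS_CMD prefix visible above), so the target listens on 10.0.0.2 while the host-side tools stay in the default namespace.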
00:20:51.564 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:51.564 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=344951 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=1115d40aeef79d75cfd2ae4228838baed698df694ea4ed86 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.7GS 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 1115d40aeef79d75cfd2ae4228838baed698df694ea4ed86 0 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 1115d40aeef79d75cfd2ae4228838baed698df694ea4ed86 0 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=1115d40aeef79d75cfd2ae4228838baed698df694ea4ed86 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
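Here gen_dhchap_key starts producing the DH-HMAC-CHAP secrets the test will cycle through. The recipe is the same for every key, only the digest name and length change; a sketch of the steps as they appear in the trace (format_dhchap_key is the nvmf/common.sh helper whose inline python body is elided by xtrace, and the redirection into $file is not visible in the trace either, so that part is an assumption inferred from the chmod that follows):

  digest=null; len=48                             # 48 hex digits -> 24 random bytes
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # hex-encode the raw bytes
  file=$(mktemp -t spdk.key-$digest.XXX)
  format_dhchap_key "$key" 0 > "$file"            # digest ids: null=0 sha256=1 sha384=2 sha512=3
  chmod 0600 "$file"                              # secrets must not be group/world readable
  echo "$file"                                    # caller stores the path in keys[]/ckeys[]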
00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.7GS 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.7GS 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.7GS 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=6a346e2e2174815d1c499b1f39c52608dcb3a5c559c287ca4a9de204c561e4cd 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.LgI 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 6a346e2e2174815d1c499b1f39c52608dcb3a5c559c287ca4a9de204c561e4cd 3 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 6a346e2e2174815d1c499b1f39c52608dcb3a5c559c287ca4a9de204c561e4cd 3 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=6a346e2e2174815d1c499b1f39c52608dcb3a5c559c287ca4a9de204c561e4cd 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.LgI 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.LgI 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.LgI 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
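The files written above hold each secret in the standard DHHC-1 representation, 'DHHC-1:<digest id>:<base64 payload>:', where the payload is the ASCII key material with a four-byte CRC-32 guard appended, as the NVMe specification defines for DH-HMAC-CHAP secrets. The connect commands further down make this easy to check; decoding the key0 secret from this very run returns the hex string generated above, plus a trailer that happens to be printable here:

  $ echo MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg== | base64 -d
  1115d40aeef79d75cfd2ae4228838baed698df694ea4ed86>)(^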
00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=543b138d0bf6a6370c4ddc71a00d2b4b 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.w18 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 543b138d0bf6a6370c4ddc71a00d2b4b 1 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 543b138d0bf6a6370c4ddc71a00d2b4b 1 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=543b138d0bf6a6370c4ddc71a00d2b4b 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.w18 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.w18 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.w18 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:52.511 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=d80add0f530fe75eb086ed6b8dbb0ec24665f010e89c3950 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.wua 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key d80add0f530fe75eb086ed6b8dbb0ec24665f010e89c3950 2 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 d80add0f530fe75eb086ed6b8dbb0ec24665f010e89c3950 2 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:52.774 17:36:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=d80add0f530fe75eb086ed6b8dbb0ec24665f010e89c3950 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.wua 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.wua 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.wua 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=70a5f9427853217c94c48a2d10f40c3ffbcdc06c965c09b2 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.O0t 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 70a5f9427853217c94c48a2d10f40c3ffbcdc06c965c09b2 2 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 70a5f9427853217c94c48a2d10f40c3ffbcdc06c965c09b2 2 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=70a5f9427853217c94c48a2d10f40c3ffbcdc06c965c09b2 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.O0t 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.O0t 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.O0t 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
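Note the pairing being built here: keys[i] is the secret the host presents (--dhchap-key) and ckeys[i] is the controller-side secret (--dhchap-ctrlr-key) that enables bidirectional authentication; ckeys[3] is deliberately left empty a little further down, so index 3 exercises the unidirectional case. Each index is later wired to the subsystem with the same RPC shape, matching the ${ckeys[$3]:+...} expansion visible below (a sketch, with the long hostnqn abbreviated as <hostnqn>):

  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> \
      --dhchap-key key$i ${ckeys[$i]:+--dhchap-ctrlr-key ckey$i}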
00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=b099bc44557b3883f8dd0715178f445f 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.7aq 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key b099bc44557b3883f8dd0715178f445f 1 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 b099bc44557b3883f8dd0715178f445f 1 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=b099bc44557b3883f8dd0715178f445f 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.7aq 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.7aq 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.7aq 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=b7ee3b8e58af0c91a561b9ff630a141b6bbbab36e6ad6d8e9c0184871a2ea642 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.oTc 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key b7ee3b8e58af0c91a561b9ff630a141b6bbbab36e6ad6d8e9c0184871a2ea642 3 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 b7ee3b8e58af0c91a561b9ff630a141b6bbbab36e6ad6d8e9c0184871a2ea642 3 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=b7ee3b8e58af0c91a561b9ff630a141b6bbbab36e6ad6d8e9c0184871a2ea642 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:20:52.774 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:53.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.oTc 00:20:53.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.oTc 00:20:53.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.oTc 00:20:53.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:53.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 344918 00:20:53.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 344918 ']' 00:20:53.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:53.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:53.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:53.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:53.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 344951 /var/tmp/host.sock 00:20:53.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 344951 ']' 00:20:53.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:20:53.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:53.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:53.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
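Both daemons are up at this point: nvmf_tgt (pid 344918) inside the namespace answering on the default /var/tmp/spdk.sock, and the initiator-side spdk_tgt (pid 344951) on /var/tmp/host.sock. Every key file therefore gets registered twice in the lines that follow, once per keyring; a sketch of one round using key0 from this run:

  # target side: rpc_cmd goes to the nvmf_tgt RPC socket (default /var/tmp/spdk.sock)
  scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.7GS
  # host side: hostrpc adds -s to reach the spdk_tgt that will run the initiator
  scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.7GS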
00:20:53.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:53.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.298 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:53.298 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:53.298 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:53.298 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.298 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.298 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.298 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:53.298 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.7GS 00:20:53.298 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.298 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.298 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.298 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.7GS 00:20:53.298 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.7GS 00:20:53.560 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.LgI ]] 00:20:53.560 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LgI 00:20:53.560 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.560 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.560 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.560 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LgI 00:20:53.560 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LgI 00:20:53.822 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:53.822 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.w18 00:20:53.822 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.822 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.822 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.822 17:36:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.w18 00:20:53.822 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.w18 00:20:54.083 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.wua ]] 00:20:54.083 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wua 00:20:54.083 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.083 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.083 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.083 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wua 00:20:54.083 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wua 00:20:54.083 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:54.083 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.O0t 00:20:54.083 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.083 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.083 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.083 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.O0t 00:20:54.083 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.O0t 00:20:54.345 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.7aq ]] 00:20:54.345 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7aq 00:20:54.345 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.345 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.345 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.345 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7aq 00:20:54.345 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7aq 00:20:54.606 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:54.606 17:36:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.oTc 00:20:54.606 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.606 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.606 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.606 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.oTc 00:20:54.606 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.oTc 00:20:54.868 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:54.868 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:54.868 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.868 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.868 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:54.868 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:55.129 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:55.129 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.129 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:55.129 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:55.129 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:55.129 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.129 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.130 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.130 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.130 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.130 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.130 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.130 
17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.130 00:20:55.391 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.391 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.391 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.391 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.391 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.391 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.391 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.391 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.391 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.391 { 00:20:55.391 "cntlid": 1, 00:20:55.391 "qid": 0, 00:20:55.391 "state": "enabled", 00:20:55.391 "thread": "nvmf_tgt_poll_group_000", 00:20:55.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:55.391 "listen_address": { 00:20:55.391 "trtype": "TCP", 00:20:55.391 "adrfam": "IPv4", 00:20:55.391 "traddr": "10.0.0.2", 00:20:55.391 "trsvcid": "4420" 00:20:55.391 }, 00:20:55.391 "peer_address": { 00:20:55.391 "trtype": "TCP", 00:20:55.391 "adrfam": "IPv4", 00:20:55.391 "traddr": "10.0.0.1", 00:20:55.391 "trsvcid": "37120" 00:20:55.391 }, 00:20:55.391 "auth": { 00:20:55.391 "state": "completed", 00:20:55.391 "digest": "sha256", 00:20:55.391 "dhgroup": "null" 00:20:55.391 } 00:20:55.391 } 00:20:55.391 ]' 00:20:55.391 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.652 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:55.652 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.652 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:55.652 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.652 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.652 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.652 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.913 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:20:55.913 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.120 17:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.120 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.120 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.380 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.380 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.380 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.380 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.380 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.380 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.380 { 00:21:00.380 "cntlid": 3, 00:21:00.380 "qid": 0, 00:21:00.380 "state": "enabled", 00:21:00.380 "thread": "nvmf_tgt_poll_group_000", 00:21:00.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:00.380 "listen_address": { 00:21:00.380 "trtype": "TCP", 00:21:00.380 "adrfam": "IPv4", 00:21:00.380 "traddr": "10.0.0.2", 00:21:00.380 "trsvcid": "4420" 00:21:00.380 }, 00:21:00.380 "peer_address": { 00:21:00.380 "trtype": "TCP", 00:21:00.380 "adrfam": "IPv4", 00:21:00.380 "traddr": "10.0.0.1", 00:21:00.380 "trsvcid": "37158" 00:21:00.380 }, 00:21:00.380 "auth": { 00:21:00.380 "state": "completed", 00:21:00.380 "digest": "sha256", 00:21:00.380 "dhgroup": "null" 00:21:00.380 } 00:21:00.380 } 00:21:00.380 ]' 00:21:00.380 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.380 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:00.380 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.380 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:00.380 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.381 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.381 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.381 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.640 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:21:00.640 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:21:01.211 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.211 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:01.211 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.211 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.211 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.211 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.211 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:01.211 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:01.473 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:21:01.473 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.473 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:01.473 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:01.473 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:01.473 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.473 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.473 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.473 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.473 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.473 17:36:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.473 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.473 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.734 00:21:01.734 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.734 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.734 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.996 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.996 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.996 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.996 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.996 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.996 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.996 { 00:21:01.996 "cntlid": 5, 00:21:01.996 "qid": 0, 00:21:01.996 "state": "enabled", 00:21:01.996 "thread": "nvmf_tgt_poll_group_000", 00:21:01.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:01.996 "listen_address": { 00:21:01.996 "trtype": "TCP", 00:21:01.996 "adrfam": "IPv4", 00:21:01.996 "traddr": "10.0.0.2", 00:21:01.996 "trsvcid": "4420" 00:21:01.996 }, 00:21:01.996 "peer_address": { 00:21:01.996 "trtype": "TCP", 00:21:01.996 "adrfam": "IPv4", 00:21:01.996 "traddr": "10.0.0.1", 00:21:01.996 "trsvcid": "37184" 00:21:01.996 }, 00:21:01.996 "auth": { 00:21:01.996 "state": "completed", 00:21:01.996 "digest": "sha256", 00:21:01.996 "dhgroup": "null" 00:21:01.996 } 00:21:01.996 } 00:21:01.996 ]' 00:21:01.996 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.996 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:01.996 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.996 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:01.996 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.996 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.996 17:36:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.996 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.256 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:21:02.257 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:21:02.828 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.828 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:02.828 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.828 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.828 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.828 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.828 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:02.828 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:03.089 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:21:03.089 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.089 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:03.089 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:03.089 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:03.089 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.089 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:03.089 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.089 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:03.089 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.089 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:03.089 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.089 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.350 00:21:03.350 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.350 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.350 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.350 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.611 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.611 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.611 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.611 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.611 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.611 { 00:21:03.611 "cntlid": 7, 00:21:03.611 "qid": 0, 00:21:03.611 "state": "enabled", 00:21:03.611 "thread": "nvmf_tgt_poll_group_000", 00:21:03.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:03.611 "listen_address": { 00:21:03.611 "trtype": "TCP", 00:21:03.611 "adrfam": "IPv4", 00:21:03.611 "traddr": "10.0.0.2", 00:21:03.611 "trsvcid": "4420" 00:21:03.611 }, 00:21:03.611 "peer_address": { 00:21:03.611 "trtype": "TCP", 00:21:03.611 "adrfam": "IPv4", 00:21:03.611 "traddr": "10.0.0.1", 00:21:03.611 "trsvcid": "50184" 00:21:03.611 }, 00:21:03.611 "auth": { 00:21:03.611 "state": "completed", 00:21:03.611 "digest": "sha256", 00:21:03.611 "dhgroup": "null" 00:21:03.611 } 00:21:03.611 } 00:21:03.611 ]' 00:21:03.611 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.611 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:03.611 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.611 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:03.611 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.611 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.611 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.611 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.872 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:21:03.872 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:21:04.443 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.443 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:04.443 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.443 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.443 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.443 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.443 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.443 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:04.443 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:04.705 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:21:04.705 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.705 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:04.705 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:04.705 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:04.705 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.705 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.705 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.705 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.705 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.705 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.705 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.705 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.966 00:21:04.966 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.966 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.966 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.966 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.966 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.966 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.966 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.966 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.966 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.966 { 00:21:04.966 "cntlid": 9, 00:21:04.966 "qid": 0, 00:21:04.966 "state": "enabled", 00:21:04.966 "thread": "nvmf_tgt_poll_group_000", 00:21:04.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:04.966 "listen_address": { 00:21:04.966 "trtype": "TCP", 00:21:04.966 "adrfam": "IPv4", 00:21:04.966 "traddr": "10.0.0.2", 00:21:04.966 "trsvcid": "4420" 00:21:04.966 }, 00:21:04.966 "peer_address": { 00:21:04.966 "trtype": "TCP", 00:21:04.966 "adrfam": "IPv4", 00:21:04.966 "traddr": "10.0.0.1", 00:21:04.966 "trsvcid": "50222" 00:21:04.966 }, 00:21:04.966 "auth": { 00:21:04.966 "state": "completed", 00:21:04.966 "digest": "sha256", 00:21:04.966 "dhgroup": "ffdhe2048" 00:21:04.966 } 00:21:04.966 } 00:21:04.966 ]' 00:21:04.966 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.227 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:05.227 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.227 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:21:05.227 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.227 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.227 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.227 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.487 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:21:05.487 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:21:06.058 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.058 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:06.058 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.058 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.058 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.058 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.058 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:06.058 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:06.319 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:21:06.319 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.319 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:06.319 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:06.319 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:06.319 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.319 17:36:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.319 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.319 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.319 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.319 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.319 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.319 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.319 00:21:06.580 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.580 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.580 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.580 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.580 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.581 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.581 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.581 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.581 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.581 { 00:21:06.581 "cntlid": 11, 00:21:06.581 "qid": 0, 00:21:06.581 "state": "enabled", 00:21:06.581 "thread": "nvmf_tgt_poll_group_000", 00:21:06.581 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:06.581 "listen_address": { 00:21:06.581 "trtype": "TCP", 00:21:06.581 "adrfam": "IPv4", 00:21:06.581 "traddr": "10.0.0.2", 00:21:06.581 "trsvcid": "4420" 00:21:06.581 }, 00:21:06.581 "peer_address": { 00:21:06.581 "trtype": "TCP", 00:21:06.581 "adrfam": "IPv4", 00:21:06.581 "traddr": "10.0.0.1", 00:21:06.581 "trsvcid": "50252" 00:21:06.581 }, 00:21:06.581 "auth": { 00:21:06.581 "state": "completed", 00:21:06.581 "digest": "sha256", 00:21:06.581 "dhgroup": "ffdhe2048" 00:21:06.581 } 00:21:06.581 } 00:21:06.581 ]' 00:21:06.581 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.581 17:36:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:06.581 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.841 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:06.841 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.841 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.841 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.841 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.841 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:21:06.841 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:21:07.783 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.783 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:07.783 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.783 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.783 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.783 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.783 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:07.783 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:07.783 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:21:07.783 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.783 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:07.783 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:07.783 17:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:07.783 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.783 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.783 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.783 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.783 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.783 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.783 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.783 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.044 00:21:08.044 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.044 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.044 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.305 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.305 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.305 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.305 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.305 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.305 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.305 { 00:21:08.305 "cntlid": 13, 00:21:08.305 "qid": 0, 00:21:08.305 "state": "enabled", 00:21:08.305 "thread": "nvmf_tgt_poll_group_000", 00:21:08.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:08.305 "listen_address": { 00:21:08.305 "trtype": "TCP", 00:21:08.305 "adrfam": "IPv4", 00:21:08.305 "traddr": "10.0.0.2", 00:21:08.305 "trsvcid": "4420" 00:21:08.305 }, 00:21:08.305 "peer_address": { 00:21:08.305 "trtype": "TCP", 00:21:08.305 "adrfam": "IPv4", 00:21:08.305 "traddr": "10.0.0.1", 00:21:08.305 "trsvcid": "50284" 00:21:08.305 }, 00:21:08.305 "auth": { 00:21:08.305 "state": "completed", 00:21:08.305 "digest": 
"sha256", 00:21:08.305 "dhgroup": "ffdhe2048" 00:21:08.305 } 00:21:08.305 } 00:21:08.305 ]' 00:21:08.305 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.305 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:08.305 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.305 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:08.305 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.566 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.566 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.566 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.567 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:21:08.567 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:21:09.137 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.399 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:09.399 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.399 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.399 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.399 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.399 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:09.399 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:09.399 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:21:09.399 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.399 17:37:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:09.399 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:09.399 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:09.399 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.399 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:09.399 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.399 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.399 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.399 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:09.399 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:09.399 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:09.659 00:21:09.659 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.659 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.659 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.924 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.924 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.924 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.924 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.924 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.924 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.924 { 00:21:09.924 "cntlid": 15, 00:21:09.924 "qid": 0, 00:21:09.924 "state": "enabled", 00:21:09.924 "thread": "nvmf_tgt_poll_group_000", 00:21:09.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:09.924 "listen_address": { 00:21:09.924 "trtype": "TCP", 00:21:09.924 "adrfam": "IPv4", 00:21:09.924 "traddr": "10.0.0.2", 00:21:09.924 "trsvcid": "4420" 00:21:09.924 }, 00:21:09.924 "peer_address": { 00:21:09.924 "trtype": "TCP", 00:21:09.924 "adrfam": "IPv4", 00:21:09.924 "traddr": "10.0.0.1", 00:21:09.924 
"trsvcid": "50316" 00:21:09.924 }, 00:21:09.924 "auth": { 00:21:09.924 "state": "completed", 00:21:09.924 "digest": "sha256", 00:21:09.924 "dhgroup": "ffdhe2048" 00:21:09.924 } 00:21:09.924 } 00:21:09.924 ]' 00:21:09.924 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.924 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:09.924 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.924 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:09.924 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.184 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.184 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.184 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.184 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:21:10.184 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:21:10.755 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.017 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:11.017 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.017 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.017 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.017 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.017 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.017 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:11.017 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:11.017 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:21:11.017 17:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.017 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:11.017 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:11.017 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:11.017 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.017 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.017 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.017 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.017 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.017 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.017 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.017 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.277 00:21:11.277 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.277 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.277 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.540 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.540 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.540 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.540 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.540 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.540 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.540 { 00:21:11.540 "cntlid": 17, 00:21:11.540 "qid": 0, 00:21:11.540 "state": "enabled", 00:21:11.540 "thread": "nvmf_tgt_poll_group_000", 00:21:11.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:11.540 "listen_address": { 00:21:11.540 "trtype": "TCP", 00:21:11.540 "adrfam": "IPv4", 
00:21:11.540 "traddr": "10.0.0.2", 00:21:11.540 "trsvcid": "4420" 00:21:11.540 }, 00:21:11.540 "peer_address": { 00:21:11.540 "trtype": "TCP", 00:21:11.540 "adrfam": "IPv4", 00:21:11.540 "traddr": "10.0.0.1", 00:21:11.540 "trsvcid": "50342" 00:21:11.540 }, 00:21:11.540 "auth": { 00:21:11.540 "state": "completed", 00:21:11.540 "digest": "sha256", 00:21:11.540 "dhgroup": "ffdhe3072" 00:21:11.540 } 00:21:11.540 } 00:21:11.540 ]' 00:21:11.540 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.540 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:11.540 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.540 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:11.540 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.801 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.801 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.801 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.801 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:21:11.801 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:21:12.742 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.742 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:12.742 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.742 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.742 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.742 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.742 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:12.742 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:12.742 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:21:12.742 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.742 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:12.742 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:12.742 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:12.742 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.742 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.742 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.742 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.742 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.742 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.742 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.742 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.005 00:21:13.005 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.005 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.005 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.005 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.005 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.005 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.005 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.264 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.264 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.264 { 
00:21:13.264 "cntlid": 19, 00:21:13.264 "qid": 0, 00:21:13.264 "state": "enabled", 00:21:13.264 "thread": "nvmf_tgt_poll_group_000", 00:21:13.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:13.264 "listen_address": { 00:21:13.264 "trtype": "TCP", 00:21:13.264 "adrfam": "IPv4", 00:21:13.264 "traddr": "10.0.0.2", 00:21:13.264 "trsvcid": "4420" 00:21:13.264 }, 00:21:13.264 "peer_address": { 00:21:13.264 "trtype": "TCP", 00:21:13.264 "adrfam": "IPv4", 00:21:13.264 "traddr": "10.0.0.1", 00:21:13.264 "trsvcid": "40376" 00:21:13.264 }, 00:21:13.264 "auth": { 00:21:13.264 "state": "completed", 00:21:13.264 "digest": "sha256", 00:21:13.264 "dhgroup": "ffdhe3072" 00:21:13.264 } 00:21:13.264 } 00:21:13.264 ]' 00:21:13.265 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.265 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:13.265 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.265 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:13.265 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.265 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.265 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.265 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.527 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:21:13.527 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:21:14.099 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.099 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:14.099 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.099 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.099 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.099 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.099 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:14.099 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:14.360 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:21:14.360 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.360 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:14.360 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:14.360 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:14.360 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.361 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.361 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.361 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.361 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.361 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.361 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.361 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.622 00:21:14.622 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.622 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.622 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.622 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.622 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.622 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.622 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.622 17:37:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.622 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.622 { 00:21:14.622 "cntlid": 21, 00:21:14.622 "qid": 0, 00:21:14.622 "state": "enabled", 00:21:14.622 "thread": "nvmf_tgt_poll_group_000", 00:21:14.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:14.622 "listen_address": { 00:21:14.622 "trtype": "TCP", 00:21:14.622 "adrfam": "IPv4", 00:21:14.622 "traddr": "10.0.0.2", 00:21:14.622 "trsvcid": "4420" 00:21:14.622 }, 00:21:14.622 "peer_address": { 00:21:14.622 "trtype": "TCP", 00:21:14.622 "adrfam": "IPv4", 00:21:14.622 "traddr": "10.0.0.1", 00:21:14.622 "trsvcid": "40406" 00:21:14.622 }, 00:21:14.622 "auth": { 00:21:14.622 "state": "completed", 00:21:14.622 "digest": "sha256", 00:21:14.622 "dhgroup": "ffdhe3072" 00:21:14.622 } 00:21:14.622 } 00:21:14.622 ]' 00:21:14.882 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.882 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:14.882 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.882 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:14.882 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.882 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.882 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.882 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.143 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:21:15.143 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:21:15.716 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.716 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:15.716 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.716 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.716 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:21:15.716 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.716 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:15.716 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:15.977 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:21:15.977 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.977 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:15.977 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:15.977 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:15.977 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.977 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:15.977 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.977 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.977 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.977 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:15.977 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:15.977 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.238 00:21:16.238 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.238 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.238 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.238 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.238 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.238 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.238 17:37:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.238 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.238 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.238 { 00:21:16.238 "cntlid": 23, 00:21:16.238 "qid": 0, 00:21:16.238 "state": "enabled", 00:21:16.238 "thread": "nvmf_tgt_poll_group_000", 00:21:16.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:16.238 "listen_address": { 00:21:16.238 "trtype": "TCP", 00:21:16.238 "adrfam": "IPv4", 00:21:16.238 "traddr": "10.0.0.2", 00:21:16.238 "trsvcid": "4420" 00:21:16.238 }, 00:21:16.238 "peer_address": { 00:21:16.238 "trtype": "TCP", 00:21:16.238 "adrfam": "IPv4", 00:21:16.238 "traddr": "10.0.0.1", 00:21:16.238 "trsvcid": "40454" 00:21:16.238 }, 00:21:16.238 "auth": { 00:21:16.238 "state": "completed", 00:21:16.238 "digest": "sha256", 00:21:16.238 "dhgroup": "ffdhe3072" 00:21:16.238 } 00:21:16.238 } 00:21:16.238 ]' 00:21:16.238 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.499 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:16.499 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.499 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:16.499 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.499 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.499 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.499 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.760 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:21:16.761 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:21:17.335 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.335 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:17.335 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.335 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.335 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:21:17.335 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:17.335 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.335 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:17.335 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:17.596 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:21:17.596 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.596 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:17.596 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:17.596 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:17.596 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.596 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.596 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.596 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.596 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.596 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.596 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.596 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.858 00:21:17.858 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.858 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.858 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.858 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.858 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.858 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.858 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.858 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.858 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.858 { 00:21:17.858 "cntlid": 25, 00:21:17.858 "qid": 0, 00:21:17.858 "state": "enabled", 00:21:17.858 "thread": "nvmf_tgt_poll_group_000", 00:21:17.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:17.858 "listen_address": { 00:21:17.858 "trtype": "TCP", 00:21:17.858 "adrfam": "IPv4", 00:21:17.858 "traddr": "10.0.0.2", 00:21:17.858 "trsvcid": "4420" 00:21:17.858 }, 00:21:17.858 "peer_address": { 00:21:17.858 "trtype": "TCP", 00:21:17.858 "adrfam": "IPv4", 00:21:17.858 "traddr": "10.0.0.1", 00:21:17.858 "trsvcid": "40498" 00:21:17.858 }, 00:21:17.858 "auth": { 00:21:17.858 "state": "completed", 00:21:17.858 "digest": "sha256", 00:21:17.858 "dhgroup": "ffdhe4096" 00:21:17.858 } 00:21:17.858 } 00:21:17.858 ]' 00:21:17.858 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.118 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:18.118 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.118 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:18.118 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.118 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.118 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.118 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.380 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:21:18.380 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:21:18.953 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.953 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:18.953 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.953 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.953 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.953 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.953 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:18.953 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:19.214 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:21:19.214 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.214 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:19.214 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:19.214 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:19.214 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.214 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.214 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.214 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.214 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.214 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.214 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.214 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.474 00:21:19.474 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.474 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.474 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.735 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.735 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.735 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.735 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.735 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.735 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.735 { 00:21:19.735 "cntlid": 27, 00:21:19.735 "qid": 0, 00:21:19.735 "state": "enabled", 00:21:19.735 "thread": "nvmf_tgt_poll_group_000", 00:21:19.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:19.735 "listen_address": { 00:21:19.735 "trtype": "TCP", 00:21:19.735 "adrfam": "IPv4", 00:21:19.735 "traddr": "10.0.0.2", 00:21:19.735 "trsvcid": "4420" 00:21:19.735 }, 00:21:19.735 "peer_address": { 00:21:19.735 "trtype": "TCP", 00:21:19.735 "adrfam": "IPv4", 00:21:19.735 "traddr": "10.0.0.1", 00:21:19.735 "trsvcid": "40506" 00:21:19.735 }, 00:21:19.735 "auth": { 00:21:19.735 "state": "completed", 00:21:19.735 "digest": "sha256", 00:21:19.735 "dhgroup": "ffdhe4096" 00:21:19.735 } 00:21:19.735 } 00:21:19.735 ]' 00:21:19.735 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.735 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:19.735 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.735 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:19.735 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.735 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.735 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.735 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.996 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:21:19.996 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:21:20.567 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:20.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.567 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:20.567 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.567 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.567 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.567 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.567 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:20.567 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:20.827 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:21:20.827 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.827 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:20.827 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:20.827 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:20.827 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.827 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.827 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.827 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.827 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.827 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.827 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.827 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.088 00:21:21.088 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
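
--- [annotation] The loop traced above is auth.sh's connect_authenticate flow, repeated once per digest x DH-group x key-index combination. Below is a minimal hand-written sketch of a single ffdhe4096 pass; the RPC names and flags match the trace verbatim, while the keyring entries key0/ckey0 are assumed to have been registered earlier in the script (that setup is outside this excerpt), and rpc/hostrpc are shorthand defined here, not helpers from auth.sh. ---

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side SPDK app socket

digest=sha256 dhgroup=ffdhe4096 keyid=0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
subnqn=nqn.2024-03.io.spdk:cnode0

# Pin the host to a single digest/DH-group so the negotiation is deterministic.
hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Register the host on the target with both directions of the DH-HMAC-CHAP key
# (the ctrlr key is omitted for indexes that have no bidirectional ckey, e.g. key3).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Authenticate through the SPDK initiator, then verify what was negotiated.
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'                  # expect: nvme0
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'  # expect: completed

hostrpc bdev_nvme_detach_controller nvme0

--- [end annotation] ---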
00:21:21.088 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.088 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.348 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.348 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.348 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.348 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.348 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.348 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.348 { 00:21:21.348 "cntlid": 29, 00:21:21.348 "qid": 0, 00:21:21.348 "state": "enabled", 00:21:21.348 "thread": "nvmf_tgt_poll_group_000", 00:21:21.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:21.348 "listen_address": { 00:21:21.348 "trtype": "TCP", 00:21:21.348 "adrfam": "IPv4", 00:21:21.348 "traddr": "10.0.0.2", 00:21:21.348 "trsvcid": "4420" 00:21:21.348 }, 00:21:21.348 "peer_address": { 00:21:21.348 "trtype": "TCP", 00:21:21.348 "adrfam": "IPv4", 00:21:21.348 "traddr": "10.0.0.1", 00:21:21.348 "trsvcid": "40538" 00:21:21.348 }, 00:21:21.348 "auth": { 00:21:21.348 "state": "completed", 00:21:21.348 "digest": "sha256", 00:21:21.348 "dhgroup": "ffdhe4096" 00:21:21.348 } 00:21:21.348 } 00:21:21.348 ]' 00:21:21.348 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.348 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:21.348 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.348 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:21.348 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.348 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.348 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.349 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.609 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:21:21.609 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: 
--dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:21:22.181 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.181 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:22.181 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.181 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.181 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.181 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.181 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:22.181 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:22.441 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:21:22.441 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.441 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:22.441 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:22.441 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:22.441 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.441 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:22.441 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.441 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.441 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.441 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:22.441 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:22.441 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:22.702 00:21:22.702 17:37:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.702 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.702 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.963 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.963 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.963 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.963 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.963 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.963 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.963 { 00:21:22.963 "cntlid": 31, 00:21:22.963 "qid": 0, 00:21:22.963 "state": "enabled", 00:21:22.963 "thread": "nvmf_tgt_poll_group_000", 00:21:22.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:22.963 "listen_address": { 00:21:22.963 "trtype": "TCP", 00:21:22.963 "adrfam": "IPv4", 00:21:22.963 "traddr": "10.0.0.2", 00:21:22.963 "trsvcid": "4420" 00:21:22.963 }, 00:21:22.963 "peer_address": { 00:21:22.963 "trtype": "TCP", 00:21:22.963 "adrfam": "IPv4", 00:21:22.963 "traddr": "10.0.0.1", 00:21:22.963 "trsvcid": "33982" 00:21:22.963 }, 00:21:22.963 "auth": { 00:21:22.963 "state": "completed", 00:21:22.963 "digest": "sha256", 00:21:22.963 "dhgroup": "ffdhe4096" 00:21:22.963 } 00:21:22.963 } 00:21:22.963 ]' 00:21:22.963 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.963 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:22.963 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.963 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:22.963 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.963 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.963 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.963 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.223 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:21:23.223 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret 
DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:21:23.796 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.796 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:23.796 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.796 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.796 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.796 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.796 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.796 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:23.796 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:24.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:21:24.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:24.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:24.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:24.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.056 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.317 00:21:24.317 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.317 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.317 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.577 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.577 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.577 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.577 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.577 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.577 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.577 { 00:21:24.577 "cntlid": 33, 00:21:24.577 "qid": 0, 00:21:24.577 "state": "enabled", 00:21:24.577 "thread": "nvmf_tgt_poll_group_000", 00:21:24.577 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:24.577 "listen_address": { 00:21:24.577 "trtype": "TCP", 00:21:24.577 "adrfam": "IPv4", 00:21:24.577 "traddr": "10.0.0.2", 00:21:24.577 "trsvcid": "4420" 00:21:24.577 }, 00:21:24.577 "peer_address": { 00:21:24.577 "trtype": "TCP", 00:21:24.577 "adrfam": "IPv4", 00:21:24.577 "traddr": "10.0.0.1", 00:21:24.577 "trsvcid": "34014" 00:21:24.577 }, 00:21:24.577 "auth": { 00:21:24.577 "state": "completed", 00:21:24.577 "digest": "sha256", 00:21:24.577 "dhgroup": "ffdhe6144" 00:21:24.577 } 00:21:24.577 } 00:21:24.577 ]' 00:21:24.577 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.577 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:24.577 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.577 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:24.577 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.577 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.577 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.577 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.837 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret 
DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:21:24.837 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:21:25.409 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.409 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:25.409 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.409 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.409 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.409 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.409 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:25.409 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:25.670 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:21:25.670 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.670 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:25.670 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:25.670 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:25.670 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.670 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.670 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.670 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.670 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.670 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.670 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.670 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.931 00:21:25.931 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.931 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.931 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.191 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.191 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.191 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.191 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.191 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.191 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.191 { 00:21:26.191 "cntlid": 35, 00:21:26.191 "qid": 0, 00:21:26.191 "state": "enabled", 00:21:26.191 "thread": "nvmf_tgt_poll_group_000", 00:21:26.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:26.191 "listen_address": { 00:21:26.191 "trtype": "TCP", 00:21:26.191 "adrfam": "IPv4", 00:21:26.191 "traddr": "10.0.0.2", 00:21:26.191 "trsvcid": "4420" 00:21:26.191 }, 00:21:26.191 "peer_address": { 00:21:26.191 "trtype": "TCP", 00:21:26.191 "adrfam": "IPv4", 00:21:26.191 "traddr": "10.0.0.1", 00:21:26.191 "trsvcid": "34042" 00:21:26.191 }, 00:21:26.191 "auth": { 00:21:26.191 "state": "completed", 00:21:26.191 "digest": "sha256", 00:21:26.191 "dhgroup": "ffdhe6144" 00:21:26.191 } 00:21:26.191 } 00:21:26.191 ]' 00:21:26.191 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.191 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:26.191 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.452 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:26.452 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.452 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.452 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.452 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.452 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:21:26.452 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:21:27.395 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.395 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:27.395 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.395 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.395 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.395 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.395 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:27.395 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:27.395 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:21:27.395 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.395 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:27.395 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:27.395 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:27.395 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.395 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.395 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.395 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.395 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.395 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.395 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.395 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.656 00:21:27.656 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.656 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.656 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.916 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.916 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.916 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.916 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.916 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.916 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.916 { 00:21:27.916 "cntlid": 37, 00:21:27.916 "qid": 0, 00:21:27.916 "state": "enabled", 00:21:27.916 "thread": "nvmf_tgt_poll_group_000", 00:21:27.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:27.916 "listen_address": { 00:21:27.916 "trtype": "TCP", 00:21:27.916 "adrfam": "IPv4", 00:21:27.916 "traddr": "10.0.0.2", 00:21:27.916 "trsvcid": "4420" 00:21:27.916 }, 00:21:27.916 "peer_address": { 00:21:27.916 "trtype": "TCP", 00:21:27.916 "adrfam": "IPv4", 00:21:27.916 "traddr": "10.0.0.1", 00:21:27.916 "trsvcid": "34070" 00:21:27.916 }, 00:21:27.916 "auth": { 00:21:27.916 "state": "completed", 00:21:27.916 "digest": "sha256", 00:21:27.916 "dhgroup": "ffdhe6144" 00:21:27.916 } 00:21:27.916 } 00:21:27.916 ]' 00:21:27.916 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.916 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:27.916 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.176 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:28.176 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.176 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.176 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:21:28.176 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.437 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:21:28.437 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:21:29.008 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.008 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:29.008 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.008 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.008 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.008 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.008 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:29.008 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:29.270 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:21:29.270 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.270 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:29.270 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:29.270 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:29.270 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.270 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:29.270 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.270 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.270 17:37:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.270 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:29.270 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:29.270 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:29.531 00:21:29.531 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.531 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.531 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.791 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.791 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.791 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.791 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.791 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.791 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.791 { 00:21:29.791 "cntlid": 39, 00:21:29.791 "qid": 0, 00:21:29.791 "state": "enabled", 00:21:29.791 "thread": "nvmf_tgt_poll_group_000", 00:21:29.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:29.791 "listen_address": { 00:21:29.791 "trtype": "TCP", 00:21:29.791 "adrfam": "IPv4", 00:21:29.791 "traddr": "10.0.0.2", 00:21:29.791 "trsvcid": "4420" 00:21:29.791 }, 00:21:29.791 "peer_address": { 00:21:29.791 "trtype": "TCP", 00:21:29.791 "adrfam": "IPv4", 00:21:29.791 "traddr": "10.0.0.1", 00:21:29.791 "trsvcid": "34116" 00:21:29.791 }, 00:21:29.791 "auth": { 00:21:29.791 "state": "completed", 00:21:29.791 "digest": "sha256", 00:21:29.791 "dhgroup": "ffdhe6144" 00:21:29.791 } 00:21:29.791 } 00:21:29.791 ]' 00:21:29.791 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.791 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:29.791 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.791 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:29.791 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.791 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:21:29.791 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.791 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.051 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:21:30.051 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:21:30.623 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.623 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:30.623 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.623 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.623 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.623 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:30.623 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.623 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:30.623 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:30.883 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:21:30.883 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.883 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:30.883 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:30.883 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:30.883 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.883 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.883 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
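
--- [annotation] After the SPDK-initiator check, each pass repeats the handshake with the kernel initiator via nvme-cli, as traced above. A sketch of that leg follows, with the long base64 secrets elided. The DHHC-1:NN: prefix is the standard NVMe-oF secret representation, where NN encodes the hash used to transform the key (00 = untransformed; 01/02/03 = SHA-256/-384/-512), and --dhchap-ctrl-secret is only passed when a bidirectional ckey exists for that index. ---

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# Kernel-initiator connect: -i 1 caps the I/O queue count, -l 0 sets
# ctrl-loss-tmo to zero so a failed handshake surfaces immediately.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
        -q "$hostnqn" --hostid "${hostnqn##*:}" -l 0 \
        --dhchap-secret 'DHHC-1:00:<host key, elided>' \
        --dhchap-ctrl-secret 'DHHC-1:03:<controller key, elided>'

# Tear down so the next digest/DH-group/key combination starts clean.
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

--- [end annotation] ---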
00:21:30.883 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.883 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.883 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.883 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.883 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.454 00:21:31.454 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.454 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.454 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.454 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.454 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.454 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.454 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.454 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.454 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.454 { 00:21:31.454 "cntlid": 41, 00:21:31.454 "qid": 0, 00:21:31.454 "state": "enabled", 00:21:31.454 "thread": "nvmf_tgt_poll_group_000", 00:21:31.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:31.454 "listen_address": { 00:21:31.454 "trtype": "TCP", 00:21:31.454 "adrfam": "IPv4", 00:21:31.454 "traddr": "10.0.0.2", 00:21:31.454 "trsvcid": "4420" 00:21:31.454 }, 00:21:31.454 "peer_address": { 00:21:31.454 "trtype": "TCP", 00:21:31.454 "adrfam": "IPv4", 00:21:31.454 "traddr": "10.0.0.1", 00:21:31.454 "trsvcid": "34146" 00:21:31.454 }, 00:21:31.454 "auth": { 00:21:31.454 "state": "completed", 00:21:31.454 "digest": "sha256", 00:21:31.454 "dhgroup": "ffdhe8192" 00:21:31.454 } 00:21:31.454 } 00:21:31.454 ]' 00:21:31.454 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.454 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:31.454 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.717 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:31.717 17:37:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.717 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.717 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.717 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.978 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:21:31.978 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:21:32.551 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.551 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:32.551 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.551 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.551 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.551 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.551 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:32.551 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:32.813 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:21:32.813 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.813 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:32.813 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:32.813 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:32.813 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.813 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.813 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.813 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.813 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.813 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.813 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.813 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.074 00:21:33.074 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.074 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.074 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.335 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.335 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.335 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.335 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.335 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.335 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.335 { 00:21:33.335 "cntlid": 43, 00:21:33.335 "qid": 0, 00:21:33.335 "state": "enabled", 00:21:33.335 "thread": "nvmf_tgt_poll_group_000", 00:21:33.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:33.335 "listen_address": { 00:21:33.335 "trtype": "TCP", 00:21:33.335 "adrfam": "IPv4", 00:21:33.335 "traddr": "10.0.0.2", 00:21:33.335 "trsvcid": "4420" 00:21:33.335 }, 00:21:33.335 "peer_address": { 00:21:33.335 "trtype": "TCP", 00:21:33.335 "adrfam": "IPv4", 00:21:33.335 "traddr": "10.0.0.1", 00:21:33.335 "trsvcid": "36408" 00:21:33.335 }, 00:21:33.335 "auth": { 00:21:33.335 "state": "completed", 00:21:33.335 "digest": "sha256", 00:21:33.335 "dhgroup": "ffdhe8192" 00:21:33.335 } 00:21:33.335 } 00:21:33.335 ]' 00:21:33.335 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.335 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:21:33.335 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.335 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:33.335 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.597 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.597 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.597 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.597 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:21:33.597 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:21:34.542 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.542 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:34.542 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.542 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.542 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.542 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.542 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:34.542 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:34.542 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:21:34.542 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.542 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:34.542 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:34.542 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:34.542 17:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.542 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.542 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.542 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.542 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.542 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.542 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.542 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.114 00:21:35.114 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.114 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.114 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.114 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.114 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.114 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.114 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.114 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.114 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.114 { 00:21:35.114 "cntlid": 45, 00:21:35.114 "qid": 0, 00:21:35.114 "state": "enabled", 00:21:35.114 "thread": "nvmf_tgt_poll_group_000", 00:21:35.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:35.114 "listen_address": { 00:21:35.114 "trtype": "TCP", 00:21:35.114 "adrfam": "IPv4", 00:21:35.114 "traddr": "10.0.0.2", 00:21:35.114 "trsvcid": "4420" 00:21:35.114 }, 00:21:35.114 "peer_address": { 00:21:35.114 "trtype": "TCP", 00:21:35.114 "adrfam": "IPv4", 00:21:35.114 "traddr": "10.0.0.1", 00:21:35.114 "trsvcid": "36438" 00:21:35.114 }, 00:21:35.114 "auth": { 00:21:35.114 "state": "completed", 00:21:35.114 "digest": "sha256", 00:21:35.114 "dhgroup": "ffdhe8192" 00:21:35.114 } 00:21:35.114 } 00:21:35.114 ]' 00:21:35.114 
17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.375 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:35.375 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.375 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.375 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.375 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.375 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.375 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.635 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:21:35.636 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:21:36.211 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.211 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:36.211 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.211 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.211 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.211 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.211 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:36.211 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:36.474 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:21:36.474 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.474 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:36.474 17:37:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:36.474 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:36.474 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.474 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:36.474 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.474 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.474 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.474 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:36.474 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.474 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.045 00:21:37.045 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.045 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.045 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.045 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.045 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.045 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.045 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.045 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.045 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.045 { 00:21:37.045 "cntlid": 47, 00:21:37.045 "qid": 0, 00:21:37.045 "state": "enabled", 00:21:37.045 "thread": "nvmf_tgt_poll_group_000", 00:21:37.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:37.045 "listen_address": { 00:21:37.045 "trtype": "TCP", 00:21:37.045 "adrfam": "IPv4", 00:21:37.045 "traddr": "10.0.0.2", 00:21:37.045 "trsvcid": "4420" 00:21:37.045 }, 00:21:37.045 "peer_address": { 00:21:37.045 "trtype": "TCP", 00:21:37.045 "adrfam": "IPv4", 00:21:37.045 "traddr": "10.0.0.1", 00:21:37.045 "trsvcid": "36462" 00:21:37.045 }, 00:21:37.045 "auth": { 00:21:37.045 "state": "completed", 00:21:37.045 
"digest": "sha256", 00:21:37.045 "dhgroup": "ffdhe8192" 00:21:37.045 } 00:21:37.045 } 00:21:37.045 ]' 00:21:37.045 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.045 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:37.045 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.307 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:37.307 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.307 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.307 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.307 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.307 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:21:37.308 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:21:38.254 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.254 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:38.254 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.254 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.254 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.254 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:38.254 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:38.254 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.254 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:38.254 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:38.254 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:21:38.254 17:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.254 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:38.254 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:38.254 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:38.254 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.254 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.254 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.254 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.254 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.254 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.254 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.254 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.514 00:21:38.514 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.514 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.514 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.776 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.776 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.776 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.776 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.776 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.776 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.776 { 00:21:38.776 "cntlid": 49, 00:21:38.776 "qid": 0, 00:21:38.776 "state": "enabled", 00:21:38.776 "thread": "nvmf_tgt_poll_group_000", 00:21:38.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:38.776 "listen_address": { 00:21:38.776 "trtype": "TCP", 00:21:38.776 "adrfam": "IPv4", 
00:21:38.776 "traddr": "10.0.0.2", 00:21:38.776 "trsvcid": "4420" 00:21:38.776 }, 00:21:38.776 "peer_address": { 00:21:38.776 "trtype": "TCP", 00:21:38.776 "adrfam": "IPv4", 00:21:38.776 "traddr": "10.0.0.1", 00:21:38.776 "trsvcid": "36488" 00:21:38.776 }, 00:21:38.776 "auth": { 00:21:38.776 "state": "completed", 00:21:38.776 "digest": "sha384", 00:21:38.776 "dhgroup": "null" 00:21:38.776 } 00:21:38.776 } 00:21:38.776 ]' 00:21:38.776 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.776 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:38.776 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.776 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:38.776 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.776 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.776 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.776 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.037 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:21:39.037 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:21:39.610 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.610 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:39.610 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.610 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.610 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.610 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.610 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:39.610 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:39.871 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:21:39.871 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.871 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:39.871 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:39.871 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:39.871 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.871 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.871 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.871 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.871 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.871 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.871 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.871 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.133 00:21:40.133 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.133 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.133 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.394 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.394 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.394 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.394 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.394 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.394 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.394 { 00:21:40.394 "cntlid": 51, 00:21:40.394 "qid": 0, 00:21:40.394 "state": "enabled", 
00:21:40.394 "thread": "nvmf_tgt_poll_group_000", 00:21:40.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:40.394 "listen_address": { 00:21:40.394 "trtype": "TCP", 00:21:40.394 "adrfam": "IPv4", 00:21:40.394 "traddr": "10.0.0.2", 00:21:40.394 "trsvcid": "4420" 00:21:40.394 }, 00:21:40.394 "peer_address": { 00:21:40.394 "trtype": "TCP", 00:21:40.394 "adrfam": "IPv4", 00:21:40.394 "traddr": "10.0.0.1", 00:21:40.394 "trsvcid": "36516" 00:21:40.394 }, 00:21:40.394 "auth": { 00:21:40.394 "state": "completed", 00:21:40.394 "digest": "sha384", 00:21:40.394 "dhgroup": "null" 00:21:40.394 } 00:21:40.394 } 00:21:40.394 ]' 00:21:40.394 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.394 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:40.394 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.394 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:40.394 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.394 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.394 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.394 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.655 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:21:40.655 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:21:41.227 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.228 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:41.228 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.228 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.228 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.228 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.228 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:21:41.228 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:41.489 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:21:41.489 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.489 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:41.489 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:41.489 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:41.489 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.489 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.489 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.489 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.489 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.489 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.489 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.489 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.750 00:21:41.750 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.750 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.750 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.011 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.011 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.011 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.011 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.011 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.011 17:37:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.011 { 00:21:42.011 "cntlid": 53, 00:21:42.011 "qid": 0, 00:21:42.011 "state": "enabled", 00:21:42.011 "thread": "nvmf_tgt_poll_group_000", 00:21:42.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:42.011 "listen_address": { 00:21:42.011 "trtype": "TCP", 00:21:42.011 "adrfam": "IPv4", 00:21:42.011 "traddr": "10.0.0.2", 00:21:42.011 "trsvcid": "4420" 00:21:42.011 }, 00:21:42.011 "peer_address": { 00:21:42.011 "trtype": "TCP", 00:21:42.011 "adrfam": "IPv4", 00:21:42.011 "traddr": "10.0.0.1", 00:21:42.011 "trsvcid": "36550" 00:21:42.011 }, 00:21:42.011 "auth": { 00:21:42.011 "state": "completed", 00:21:42.011 "digest": "sha384", 00:21:42.011 "dhgroup": "null" 00:21:42.011 } 00:21:42.011 } 00:21:42.011 ]' 00:21:42.011 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.011 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:42.011 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.011 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:42.011 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.011 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.011 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.011 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.272 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:21:42.272 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:21:42.843 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.843 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:42.843 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.843 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.843 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.843 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:21:42.843 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:42.843 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:43.105 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:21:43.105 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.105 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:43.105 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:43.105 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:43.105 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.105 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:43.105 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.105 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.105 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.105 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:43.105 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:43.105 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:43.366 00:21:43.366 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.366 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.366 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.627 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.627 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.627 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.627 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.627 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.627 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.627 { 00:21:43.627 "cntlid": 55, 00:21:43.627 "qid": 0, 00:21:43.627 "state": "enabled", 00:21:43.627 "thread": "nvmf_tgt_poll_group_000", 00:21:43.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:43.627 "listen_address": { 00:21:43.627 "trtype": "TCP", 00:21:43.627 "adrfam": "IPv4", 00:21:43.627 "traddr": "10.0.0.2", 00:21:43.627 "trsvcid": "4420" 00:21:43.627 }, 00:21:43.627 "peer_address": { 00:21:43.627 "trtype": "TCP", 00:21:43.627 "adrfam": "IPv4", 00:21:43.627 "traddr": "10.0.0.1", 00:21:43.627 "trsvcid": "38260" 00:21:43.627 }, 00:21:43.627 "auth": { 00:21:43.627 "state": "completed", 00:21:43.627 "digest": "sha384", 00:21:43.627 "dhgroup": "null" 00:21:43.627 } 00:21:43.627 } 00:21:43.627 ]' 00:21:43.627 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.627 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:43.627 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.627 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:43.627 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.627 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.627 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.627 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.888 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:21:43.888 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:21:44.458 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.458 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:44.458 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.458 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.458 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.458 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:44.458 17:37:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.458 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:44.458 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:44.720 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:21:44.720 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.720 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:44.720 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:44.720 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:44.720 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.720 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.720 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.720 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.720 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.720 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.720 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.720 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.982 00:21:44.982 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.982 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.982 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.982 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.982 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.982 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:44.982 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.982 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.982 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.982 { 00:21:44.982 "cntlid": 57, 00:21:44.982 "qid": 0, 00:21:44.982 "state": "enabled", 00:21:44.982 "thread": "nvmf_tgt_poll_group_000", 00:21:44.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:44.982 "listen_address": { 00:21:44.982 "trtype": "TCP", 00:21:44.982 "adrfam": "IPv4", 00:21:44.982 "traddr": "10.0.0.2", 00:21:44.982 "trsvcid": "4420" 00:21:44.982 }, 00:21:44.982 "peer_address": { 00:21:44.982 "trtype": "TCP", 00:21:44.982 "adrfam": "IPv4", 00:21:44.982 "traddr": "10.0.0.1", 00:21:44.982 "trsvcid": "38300" 00:21:44.982 }, 00:21:44.982 "auth": { 00:21:44.982 "state": "completed", 00:21:44.982 "digest": "sha384", 00:21:44.982 "dhgroup": "ffdhe2048" 00:21:44.982 } 00:21:44.982 } 00:21:44.982 ]' 00:21:44.982 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.243 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:45.243 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.243 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:45.243 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.243 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.243 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.243 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.503 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:21:45.504 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:21:46.074 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.074 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:46.074 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.074 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.074 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.074 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.074 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:46.074 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:46.336 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:21:46.336 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.336 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:46.336 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:46.336 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:46.336 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.336 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.336 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.336 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.336 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.336 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.336 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.336 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.597 00:21:46.597 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.597 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.597 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.597 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.597 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.597 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.597 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.597 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.597 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.597 { 00:21:46.597 "cntlid": 59, 00:21:46.597 "qid": 0, 00:21:46.597 "state": "enabled", 00:21:46.597 "thread": "nvmf_tgt_poll_group_000", 00:21:46.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:46.597 "listen_address": { 00:21:46.597 "trtype": "TCP", 00:21:46.597 "adrfam": "IPv4", 00:21:46.597 "traddr": "10.0.0.2", 00:21:46.597 "trsvcid": "4420" 00:21:46.597 }, 00:21:46.597 "peer_address": { 00:21:46.597 "trtype": "TCP", 00:21:46.597 "adrfam": "IPv4", 00:21:46.597 "traddr": "10.0.0.1", 00:21:46.597 "trsvcid": "38332" 00:21:46.597 }, 00:21:46.597 "auth": { 00:21:46.597 "state": "completed", 00:21:46.597 "digest": "sha384", 00:21:46.597 "dhgroup": "ffdhe2048" 00:21:46.597 } 00:21:46.597 } 00:21:46.597 ]' 00:21:46.597 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.858 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:46.858 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.858 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:46.858 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.858 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.858 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.858 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.119 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:21:47.119 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:21:47.690 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.690 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:47.690 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.690 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.690 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.690 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.690 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:47.690 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:47.950 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:47.950 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.950 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:47.950 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:47.950 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:47.950 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.950 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.950 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.950 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.950 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.950 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.950 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.950 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.950 00:21:48.211 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.211 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:48.211 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.211 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.211 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.211 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.211 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.211 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.211 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.211 { 00:21:48.211 "cntlid": 61, 00:21:48.211 "qid": 0, 00:21:48.211 "state": "enabled", 00:21:48.211 "thread": "nvmf_tgt_poll_group_000", 00:21:48.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:48.211 "listen_address": { 00:21:48.211 "trtype": "TCP", 00:21:48.211 "adrfam": "IPv4", 00:21:48.211 "traddr": "10.0.0.2", 00:21:48.211 "trsvcid": "4420" 00:21:48.211 }, 00:21:48.211 "peer_address": { 00:21:48.211 "trtype": "TCP", 00:21:48.211 "adrfam": "IPv4", 00:21:48.211 "traddr": "10.0.0.1", 00:21:48.211 "trsvcid": "38354" 00:21:48.211 }, 00:21:48.211 "auth": { 00:21:48.211 "state": "completed", 00:21:48.211 "digest": "sha384", 00:21:48.211 "dhgroup": "ffdhe2048" 00:21:48.211 } 00:21:48.211 } 00:21:48.211 ]' 00:21:48.211 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.211 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:48.211 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.471 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:48.471 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.471 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.471 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.471 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.731 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:21:48.731 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:21:49.303 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.303 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:49.303 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.303 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.303 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.303 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.303 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:49.303 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:49.565 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:21:49.565 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.565 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:49.565 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:49.565 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:49.565 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.565 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:49.565 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.565 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.565 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.565 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:49.565 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:49.565 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:49.826 00:21:49.826 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.826 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.826 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.826 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.826 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.826 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.826 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.826 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.826 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.826 { 00:21:49.826 "cntlid": 63, 00:21:49.826 "qid": 0, 00:21:49.826 "state": "enabled", 00:21:49.826 "thread": "nvmf_tgt_poll_group_000", 00:21:49.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:49.826 "listen_address": { 00:21:49.826 "trtype": "TCP", 00:21:49.826 "adrfam": "IPv4", 00:21:49.826 "traddr": "10.0.0.2", 00:21:49.826 "trsvcid": "4420" 00:21:49.826 }, 00:21:49.826 "peer_address": { 00:21:49.826 "trtype": "TCP", 00:21:49.826 "adrfam": "IPv4", 00:21:49.826 "traddr": "10.0.0.1", 00:21:49.826 "trsvcid": "38366" 00:21:49.826 }, 00:21:49.826 "auth": { 00:21:49.826 "state": "completed", 00:21:49.826 "digest": "sha384", 00:21:49.826 "dhgroup": "ffdhe2048" 00:21:49.826 } 00:21:49.826 } 00:21:49.826 ]' 00:21:49.826 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.826 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:49.826 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.086 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:50.086 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.086 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.086 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.086 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.347 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:21:50.347 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:21:50.918 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:50.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.918 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:50.918 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.918 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.918 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.918 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:50.918 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.918 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:50.918 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:51.179 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:51.179 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.179 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:51.179 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:51.179 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:51.179 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.179 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.179 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.179 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.179 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.179 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.179 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.179 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.440 
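[annotation] What the surrounding trace is exercising, condensed into a sketch: target/auth.sh iterates every DH group against every key index, pinning the host to a single sha384/dhgroup combination per pass before authenticating. The helper names (hostrpc, rpc_cmd, connect_authenticate) and the loop heads at target/auth.sh@119-121 appear verbatim in the trace above; the body below is an abbreviation for orientation, not the script itself.

    for dhgroup in "${dhgroups[@]}"; do        # null, ffdhe2048, ffdhe3072, ffdhe4096, ...
        for keyid in "${!keys[@]}"; do         # key0..key3; ckey only where one is defined
            # host side: accept exactly one digest/dhgroup pair for this pass
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
            # register the host on the subsystem with the matching key pair, attach
            # nvme0 through the SPDK bdev path, verify the qpair auth, then tear down
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done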
00:21:51.440 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.440 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.440 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.440 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.440 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.440 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.440 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.440 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.440 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.440 { 00:21:51.440 "cntlid": 65, 00:21:51.440 "qid": 0, 00:21:51.440 "state": "enabled", 00:21:51.440 "thread": "nvmf_tgt_poll_group_000", 00:21:51.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:51.440 "listen_address": { 00:21:51.440 "trtype": "TCP", 00:21:51.440 "adrfam": "IPv4", 00:21:51.440 "traddr": "10.0.0.2", 00:21:51.440 "trsvcid": "4420" 00:21:51.440 }, 00:21:51.440 "peer_address": { 00:21:51.440 "trtype": "TCP", 00:21:51.440 "adrfam": "IPv4", 00:21:51.440 "traddr": "10.0.0.1", 00:21:51.440 "trsvcid": "38394" 00:21:51.440 }, 00:21:51.440 "auth": { 00:21:51.440 "state": "completed", 00:21:51.440 "digest": "sha384", 00:21:51.440 "dhgroup": "ffdhe3072" 00:21:51.440 } 00:21:51.440 } 00:21:51.441 ]' 00:21:51.441 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.441 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:51.441 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.702 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:51.702 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.702 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.702 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.702 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.963 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:21:51.963 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:21:52.535 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.535 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:52.535 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.535 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.535 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.535 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.535 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:52.535 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:52.795 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:52.796 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.796 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:52.796 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:52.796 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:52.796 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.796 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.796 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.796 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.796 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.796 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.796 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.796 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.057 00:21:53.057 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.057 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.057 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.057 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.057 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.057 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.057 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.057 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.057 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.057 { 00:21:53.057 "cntlid": 67, 00:21:53.057 "qid": 0, 00:21:53.057 "state": "enabled", 00:21:53.057 "thread": "nvmf_tgt_poll_group_000", 00:21:53.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:53.057 "listen_address": { 00:21:53.057 "trtype": "TCP", 00:21:53.057 "adrfam": "IPv4", 00:21:53.057 "traddr": "10.0.0.2", 00:21:53.057 "trsvcid": "4420" 00:21:53.057 }, 00:21:53.057 "peer_address": { 00:21:53.057 "trtype": "TCP", 00:21:53.057 "adrfam": "IPv4", 00:21:53.057 "traddr": "10.0.0.1", 00:21:53.057 "trsvcid": "56832" 00:21:53.057 }, 00:21:53.057 "auth": { 00:21:53.057 "state": "completed", 00:21:53.057 "digest": "sha384", 00:21:53.057 "dhgroup": "ffdhe3072" 00:21:53.057 } 00:21:53.057 } 00:21:53.057 ]' 00:21:53.057 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.319 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:53.319 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.319 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:53.319 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.319 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.319 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.319 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.579 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret 
DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:21:53.580 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:21:54.239 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.239 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:54.239 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.239 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.239 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.239 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.239 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:54.239 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:54.239 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:54.239 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.239 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:54.239 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:54.239 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:54.239 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.239 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.239 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.239 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.239 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.239 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.239 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.239 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.595 00:21:54.595 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.595 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.595 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.858 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.858 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.858 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.858 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.858 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.858 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.858 { 00:21:54.858 "cntlid": 69, 00:21:54.858 "qid": 0, 00:21:54.858 "state": "enabled", 00:21:54.858 "thread": "nvmf_tgt_poll_group_000", 00:21:54.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:54.858 "listen_address": { 00:21:54.858 "trtype": "TCP", 00:21:54.858 "adrfam": "IPv4", 00:21:54.858 "traddr": "10.0.0.2", 00:21:54.858 "trsvcid": "4420" 00:21:54.858 }, 00:21:54.858 "peer_address": { 00:21:54.858 "trtype": "TCP", 00:21:54.858 "adrfam": "IPv4", 00:21:54.858 "traddr": "10.0.0.1", 00:21:54.858 "trsvcid": "56854" 00:21:54.858 }, 00:21:54.858 "auth": { 00:21:54.858 "state": "completed", 00:21:54.858 "digest": "sha384", 00:21:54.858 "dhgroup": "ffdhe3072" 00:21:54.858 } 00:21:54.858 } 00:21:54.858 ]' 00:21:54.858 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.858 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:54.858 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.858 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:54.858 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.123 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.123 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.123 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:55.123 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:21:55.123 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:21:56.081 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.081 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:56.081 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.081 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.081 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.081 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.081 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:56.081 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:56.081 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:56.081 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.081 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:56.081 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:56.081 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:56.081 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.081 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:56.081 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.081 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.081 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.081 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
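[annotation] Each pass is judged by the same three jq probes repeated throughout the trace (target/auth.sh@73-77): the host must report the attached controller, and the target's qpair listing must show the negotiated digest, the DH group under test, and a completed auth state. A minimal reconstruction, assuming the qpairs JSON is captured into a shell variable as the qpairs='[...]' assignments above suggest:

    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384" ]]     # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe3072" ]]  # group under test this pass
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]  # DH-HMAC-CHAP finished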
00:21:56.081 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:56.081 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:56.351 00:21:56.351 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.351 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.351 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.623 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.623 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.623 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.623 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.623 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.623 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.623 { 00:21:56.623 "cntlid": 71, 00:21:56.623 "qid": 0, 00:21:56.623 "state": "enabled", 00:21:56.623 "thread": "nvmf_tgt_poll_group_000", 00:21:56.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:56.623 "listen_address": { 00:21:56.623 "trtype": "TCP", 00:21:56.623 "adrfam": "IPv4", 00:21:56.623 "traddr": "10.0.0.2", 00:21:56.623 "trsvcid": "4420" 00:21:56.623 }, 00:21:56.623 "peer_address": { 00:21:56.623 "trtype": "TCP", 00:21:56.623 "adrfam": "IPv4", 00:21:56.623 "traddr": "10.0.0.1", 00:21:56.623 "trsvcid": "56892" 00:21:56.623 }, 00:21:56.623 "auth": { 00:21:56.623 "state": "completed", 00:21:56.623 "digest": "sha384", 00:21:56.623 "dhgroup": "ffdhe3072" 00:21:56.623 } 00:21:56.623 } 00:21:56.623 ]' 00:21:56.623 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.623 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:56.623 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.623 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:56.623 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.623 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.623 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.623 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.906 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:21:56.906 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:21:57.521 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.521 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:57.521 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.521 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.521 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.521 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:57.521 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.521 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:57.521 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:57.793 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:57.793 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.793 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:57.793 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:57.793 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:57.793 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.793 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.793 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.793 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.793 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
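[annotation] Alongside the SPDK-internal bdev path, each key pair is also driven through the kernel initiator, as the nvme connect/disconnect records above show (target/auth.sh@36, @80-@82). The general shape of that call, with placeholder secrets standing in for the DHHC-1 blobs printed in the trace and $HOSTID standing in for the 00539ede-... UUID (both illustrative, not real values):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid "$HOSTID" -l 0 \
        --dhchap-secret 'DHHC-1:00:<base64-key>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<base64-ctrl-key>:'   # omitted when no ckey is defined
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0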
00:21:57.793 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.793 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.793 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.793 00:21:58.075 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.075 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.075 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.076 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.076 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.076 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.076 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.076 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.076 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.076 { 00:21:58.076 "cntlid": 73, 00:21:58.076 "qid": 0, 00:21:58.076 "state": "enabled", 00:21:58.076 "thread": "nvmf_tgt_poll_group_000", 00:21:58.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:58.076 "listen_address": { 00:21:58.076 "trtype": "TCP", 00:21:58.076 "adrfam": "IPv4", 00:21:58.076 "traddr": "10.0.0.2", 00:21:58.076 "trsvcid": "4420" 00:21:58.076 }, 00:21:58.076 "peer_address": { 00:21:58.076 "trtype": "TCP", 00:21:58.076 "adrfam": "IPv4", 00:21:58.076 "traddr": "10.0.0.1", 00:21:58.076 "trsvcid": "56914" 00:21:58.076 }, 00:21:58.076 "auth": { 00:21:58.076 "state": "completed", 00:21:58.076 "digest": "sha384", 00:21:58.076 "dhgroup": "ffdhe4096" 00:21:58.076 } 00:21:58.076 } 00:21:58.076 ]' 00:21:58.076 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.076 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:58.076 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.354 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:58.354 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.354 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.354 
17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.354 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.354 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:21:58.354 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:21:59.318 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.318 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:59.318 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.318 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.318 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.318 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.318 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:59.318 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:59.318 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:59.318 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.318 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:59.318 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:59.318 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:59.318 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.318 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.318 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.318 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.318 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.318 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.318 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.318 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.584 00:21:59.584 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.584 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.584 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.853 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.853 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.853 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.853 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.853 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.853 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.853 { 00:21:59.853 "cntlid": 75, 00:21:59.853 "qid": 0, 00:21:59.853 "state": "enabled", 00:21:59.853 "thread": "nvmf_tgt_poll_group_000", 00:21:59.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:59.853 "listen_address": { 00:21:59.853 "trtype": "TCP", 00:21:59.853 "adrfam": "IPv4", 00:21:59.853 "traddr": "10.0.0.2", 00:21:59.853 "trsvcid": "4420" 00:21:59.853 }, 00:21:59.853 "peer_address": { 00:21:59.853 "trtype": "TCP", 00:21:59.853 "adrfam": "IPv4", 00:21:59.853 "traddr": "10.0.0.1", 00:21:59.853 "trsvcid": "56962" 00:21:59.853 }, 00:21:59.853 "auth": { 00:21:59.853 "state": "completed", 00:21:59.853 "digest": "sha384", 00:21:59.853 "dhgroup": "ffdhe4096" 00:21:59.853 } 00:21:59.853 } 00:21:59.853 ]' 00:21:59.853 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.853 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:59.853 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.853 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:21:59.853 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.853 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.853 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.853 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.121 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:22:00.122 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:22:00.729 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.729 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:00.729 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.729 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.729 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.729 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.729 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:00.729 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:01.013 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:22:01.013 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.013 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:01.013 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:01.013 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:01.013 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.013 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.013 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.013 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.013 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.013 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.013 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.013 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.285 00:22:01.285 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.285 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.285 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.285 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.285 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.285 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.285 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.559 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.559 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.559 { 00:22:01.559 "cntlid": 77, 00:22:01.559 "qid": 0, 00:22:01.559 "state": "enabled", 00:22:01.559 "thread": "nvmf_tgt_poll_group_000", 00:22:01.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:01.559 "listen_address": { 00:22:01.559 "trtype": "TCP", 00:22:01.559 "adrfam": "IPv4", 00:22:01.559 "traddr": "10.0.0.2", 00:22:01.559 "trsvcid": "4420" 00:22:01.559 }, 00:22:01.559 "peer_address": { 00:22:01.559 "trtype": "TCP", 00:22:01.559 "adrfam": "IPv4", 00:22:01.559 "traddr": "10.0.0.1", 00:22:01.559 "trsvcid": "56986" 00:22:01.559 }, 00:22:01.559 "auth": { 00:22:01.559 "state": "completed", 00:22:01.559 "digest": "sha384", 00:22:01.559 "dhgroup": "ffdhe4096" 00:22:01.559 } 00:22:01.559 } 00:22:01.559 ]' 00:22:01.559 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.559 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:01.559 17:37:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.559 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:01.559 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.559 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.559 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.559 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.833 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:22:01.833 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:22:02.438 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.438 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:02.438 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.438 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.438 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.438 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.438 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:02.438 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:02.711 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:22:02.711 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.711 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:02.711 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:02.711 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:02.711 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.711 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:02.711 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.711 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.711 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.711 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:02.711 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.711 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.981 00:22:02.981 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.981 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.981 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.981 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.981 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.981 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.981 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.981 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.981 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.981 { 00:22:02.981 "cntlid": 79, 00:22:02.981 "qid": 0, 00:22:02.981 "state": "enabled", 00:22:02.981 "thread": "nvmf_tgt_poll_group_000", 00:22:02.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:02.981 "listen_address": { 00:22:02.981 "trtype": "TCP", 00:22:02.981 "adrfam": "IPv4", 00:22:02.981 "traddr": "10.0.0.2", 00:22:02.981 "trsvcid": "4420" 00:22:02.981 }, 00:22:02.981 "peer_address": { 00:22:02.981 "trtype": "TCP", 00:22:02.981 "adrfam": "IPv4", 00:22:02.981 "traddr": "10.0.0.1", 00:22:02.981 "trsvcid": "44516" 00:22:02.981 }, 00:22:02.981 "auth": { 00:22:02.981 "state": "completed", 00:22:02.981 "digest": "sha384", 00:22:02.981 "dhgroup": "ffdhe4096" 00:22:02.981 } 00:22:02.981 } 00:22:02.981 ]' 00:22:02.981 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.253 17:37:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:03.253 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.253 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:03.253 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.253 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.253 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.253 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.524 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:22:03.524 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:22:04.112 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.112 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:04.112 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.112 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.112 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.112 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:04.112 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.112 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:04.112 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:04.388 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:22:04.388 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.388 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:04.388 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:04.388 17:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:04.388 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.388 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.388 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.388 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.388 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.388 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.388 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.388 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.682 00:22:04.682 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.682 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.682 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.973 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.973 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.973 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.973 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.973 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.973 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.973 { 00:22:04.973 "cntlid": 81, 00:22:04.973 "qid": 0, 00:22:04.973 "state": "enabled", 00:22:04.973 "thread": "nvmf_tgt_poll_group_000", 00:22:04.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:04.973 "listen_address": { 00:22:04.973 "trtype": "TCP", 00:22:04.973 "adrfam": "IPv4", 00:22:04.973 "traddr": "10.0.0.2", 00:22:04.973 "trsvcid": "4420" 00:22:04.973 }, 00:22:04.973 "peer_address": { 00:22:04.973 "trtype": "TCP", 00:22:04.973 "adrfam": "IPv4", 00:22:04.973 "traddr": "10.0.0.1", 00:22:04.973 "trsvcid": "44530" 00:22:04.973 }, 00:22:04.973 "auth": { 00:22:04.973 "state": "completed", 00:22:04.973 "digest": 
"sha384", 00:22:04.973 "dhgroup": "ffdhe6144" 00:22:04.973 } 00:22:04.973 } 00:22:04.973 ]' 00:22:04.973 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.973 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:04.973 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.973 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:04.973 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.973 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.973 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.973 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.246 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:22:05.246 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:22:05.860 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.860 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:05.860 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.860 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.860 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.860 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.860 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:05.860 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:06.131 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:22:06.131 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.131 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:06.131 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:06.131 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:06.131 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.131 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.131 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.131 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.131 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.131 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.131 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.131 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.402 00:22:06.402 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.402 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.402 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.671 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.671 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.671 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.671 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.671 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.671 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.671 { 00:22:06.671 "cntlid": 83, 00:22:06.671 "qid": 0, 00:22:06.671 "state": "enabled", 00:22:06.671 "thread": "nvmf_tgt_poll_group_000", 00:22:06.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:06.671 "listen_address": { 00:22:06.671 "trtype": "TCP", 00:22:06.671 "adrfam": "IPv4", 00:22:06.671 "traddr": "10.0.0.2", 00:22:06.671 
"trsvcid": "4420" 00:22:06.671 }, 00:22:06.671 "peer_address": { 00:22:06.671 "trtype": "TCP", 00:22:06.671 "adrfam": "IPv4", 00:22:06.671 "traddr": "10.0.0.1", 00:22:06.671 "trsvcid": "44546" 00:22:06.671 }, 00:22:06.671 "auth": { 00:22:06.671 "state": "completed", 00:22:06.671 "digest": "sha384", 00:22:06.671 "dhgroup": "ffdhe6144" 00:22:06.671 } 00:22:06.671 } 00:22:06.671 ]' 00:22:06.671 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.671 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:06.671 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.671 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:06.671 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.671 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.671 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.671 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.941 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:22:06.941 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:22:07.545 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.545 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:07.545 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.546 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.546 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.546 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.546 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:07.546 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:07.838 
17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:22:07.838 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.838 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:07.838 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:07.838 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:07.838 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.838 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.838 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.838 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.838 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.838 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.838 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.838 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.139 00:22:08.139 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.139 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.139 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.139 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.139 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.139 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.139 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.139 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.139 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.139 { 00:22:08.139 "cntlid": 85, 00:22:08.139 "qid": 0, 00:22:08.139 "state": "enabled", 00:22:08.139 "thread": "nvmf_tgt_poll_group_000", 00:22:08.139 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:08.139 "listen_address": { 00:22:08.139 "trtype": "TCP", 00:22:08.139 "adrfam": "IPv4", 00:22:08.139 "traddr": "10.0.0.2", 00:22:08.139 "trsvcid": "4420" 00:22:08.139 }, 00:22:08.139 "peer_address": { 00:22:08.139 "trtype": "TCP", 00:22:08.139 "adrfam": "IPv4", 00:22:08.139 "traddr": "10.0.0.1", 00:22:08.139 "trsvcid": "44588" 00:22:08.139 }, 00:22:08.139 "auth": { 00:22:08.139 "state": "completed", 00:22:08.139 "digest": "sha384", 00:22:08.139 "dhgroup": "ffdhe6144" 00:22:08.139 } 00:22:08.139 } 00:22:08.139 ]' 00:22:08.139 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.430 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:08.430 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.430 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:08.430 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.430 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.430 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.430 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.728 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:22:08.728 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:22:09.318 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.318 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:09.318 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.318 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.318 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.318 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.318 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:09.318 17:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:09.318 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:22:09.318 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.318 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:09.318 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:09.318 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:09.318 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.318 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:09.318 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.318 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.318 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.318 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:09.318 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.318 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.912 00:22:09.912 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.912 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.912 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.912 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.912 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.912 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.912 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.912 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.912 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.912 { 00:22:09.912 "cntlid": 87, 
00:22:09.912 "qid": 0, 00:22:09.912 "state": "enabled", 00:22:09.912 "thread": "nvmf_tgt_poll_group_000", 00:22:09.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:09.912 "listen_address": { 00:22:09.912 "trtype": "TCP", 00:22:09.912 "adrfam": "IPv4", 00:22:09.912 "traddr": "10.0.0.2", 00:22:09.912 "trsvcid": "4420" 00:22:09.912 }, 00:22:09.912 "peer_address": { 00:22:09.912 "trtype": "TCP", 00:22:09.912 "adrfam": "IPv4", 00:22:09.912 "traddr": "10.0.0.1", 00:22:09.912 "trsvcid": "44604" 00:22:09.912 }, 00:22:09.912 "auth": { 00:22:09.912 "state": "completed", 00:22:09.912 "digest": "sha384", 00:22:09.912 "dhgroup": "ffdhe6144" 00:22:09.912 } 00:22:09.912 } 00:22:09.912 ]' 00:22:09.912 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.912 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:09.912 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.912 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:09.912 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.199 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.199 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.199 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.199 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:22:10.200 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:22:10.808 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.808 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:10.808 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.808 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.083 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.083 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:11.083 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:11.083 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:11.083 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:11.083 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:22:11.083 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:11.083 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:11.083 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:11.083 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:11.083 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.083 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.083 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.083 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.083 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.083 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.083 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.083 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.701 00:22:11.701 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.701 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.701 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.701 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.701 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.701 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.701 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.701 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.968 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.968 { 00:22:11.968 "cntlid": 89, 00:22:11.968 "qid": 0, 00:22:11.968 "state": "enabled", 00:22:11.968 "thread": "nvmf_tgt_poll_group_000", 00:22:11.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:11.968 "listen_address": { 00:22:11.968 "trtype": "TCP", 00:22:11.968 "adrfam": "IPv4", 00:22:11.968 "traddr": "10.0.0.2", 00:22:11.968 "trsvcid": "4420" 00:22:11.968 }, 00:22:11.968 "peer_address": { 00:22:11.968 "trtype": "TCP", 00:22:11.968 "adrfam": "IPv4", 00:22:11.968 "traddr": "10.0.0.1", 00:22:11.968 "trsvcid": "44634" 00:22:11.968 }, 00:22:11.968 "auth": { 00:22:11.968 "state": "completed", 00:22:11.968 "digest": "sha384", 00:22:11.968 "dhgroup": "ffdhe8192" 00:22:11.968 } 00:22:11.968 } 00:22:11.968 ]' 00:22:11.968 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.968 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:11.968 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.968 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:11.968 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.968 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.968 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.968 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.233 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:22:12.233 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:22:12.822 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.822 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:12.822 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.822 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.822 17:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.822 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.822 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:12.822 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:13.094 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:22:13.094 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:13.094 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:13.094 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:13.094 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:13.094 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.094 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.094 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.094 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.094 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.094 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.094 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.094 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.367 00:22:13.367 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.367 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.367 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.645 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.645 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:22:13.645 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.645 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.645 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.645 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.645 { 00:22:13.645 "cntlid": 91, 00:22:13.645 "qid": 0, 00:22:13.645 "state": "enabled", 00:22:13.645 "thread": "nvmf_tgt_poll_group_000", 00:22:13.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:13.645 "listen_address": { 00:22:13.645 "trtype": "TCP", 00:22:13.645 "adrfam": "IPv4", 00:22:13.645 "traddr": "10.0.0.2", 00:22:13.645 "trsvcid": "4420" 00:22:13.645 }, 00:22:13.645 "peer_address": { 00:22:13.645 "trtype": "TCP", 00:22:13.645 "adrfam": "IPv4", 00:22:13.645 "traddr": "10.0.0.1", 00:22:13.645 "trsvcid": "43700" 00:22:13.645 }, 00:22:13.645 "auth": { 00:22:13.645 "state": "completed", 00:22:13.645 "digest": "sha384", 00:22:13.645 "dhgroup": "ffdhe8192" 00:22:13.645 } 00:22:13.645 } 00:22:13.645 ]' 00:22:13.645 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.646 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:13.646 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.934 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:13.934 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.934 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.934 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.934 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.934 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:22:13.934 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:22:14.542 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.824 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:14.824 17:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.824 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.824 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.824 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.825 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:14.825 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:14.825 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:22:14.825 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.825 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:14.825 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:14.825 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:14.825 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.825 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.825 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.825 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.825 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.825 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.825 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.825 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.409 00:22:15.409 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.409 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.409 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.699 17:38:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.699 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.699 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.699 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.699 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.699 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.699 { 00:22:15.699 "cntlid": 93, 00:22:15.699 "qid": 0, 00:22:15.699 "state": "enabled", 00:22:15.699 "thread": "nvmf_tgt_poll_group_000", 00:22:15.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:15.699 "listen_address": { 00:22:15.699 "trtype": "TCP", 00:22:15.699 "adrfam": "IPv4", 00:22:15.699 "traddr": "10.0.0.2", 00:22:15.699 "trsvcid": "4420" 00:22:15.699 }, 00:22:15.699 "peer_address": { 00:22:15.699 "trtype": "TCP", 00:22:15.699 "adrfam": "IPv4", 00:22:15.699 "traddr": "10.0.0.1", 00:22:15.699 "trsvcid": "43730" 00:22:15.699 }, 00:22:15.699 "auth": { 00:22:15.699 "state": "completed", 00:22:15.699 "digest": "sha384", 00:22:15.699 "dhgroup": "ffdhe8192" 00:22:15.699 } 00:22:15.699 } 00:22:15.699 ]' 00:22:15.699 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.699 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:15.699 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.699 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:15.699 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.699 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.699 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.699 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.982 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:22:15.982 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:22:16.623 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.623 17:38:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:16.623 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.623 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.623 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.623 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:16.623 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:16.623 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:16.624 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:22:16.624 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:16.624 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:16.624 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:16.624 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:16.624 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.624 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:16.624 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.624 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.624 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.624 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:16.624 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:16.624 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:17.228 00:22:17.228 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:17.228 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:17.228 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.509 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.509 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.509 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.509 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.509 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.509 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:17.509 { 00:22:17.509 "cntlid": 95, 00:22:17.509 "qid": 0, 00:22:17.509 "state": "enabled", 00:22:17.509 "thread": "nvmf_tgt_poll_group_000", 00:22:17.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:17.509 "listen_address": { 00:22:17.509 "trtype": "TCP", 00:22:17.509 "adrfam": "IPv4", 00:22:17.509 "traddr": "10.0.0.2", 00:22:17.509 "trsvcid": "4420" 00:22:17.509 }, 00:22:17.509 "peer_address": { 00:22:17.509 "trtype": "TCP", 00:22:17.509 "adrfam": "IPv4", 00:22:17.509 "traddr": "10.0.0.1", 00:22:17.509 "trsvcid": "43766" 00:22:17.509 }, 00:22:17.509 "auth": { 00:22:17.509 "state": "completed", 00:22:17.509 "digest": "sha384", 00:22:17.509 "dhgroup": "ffdhe8192" 00:22:17.509 } 00:22:17.509 } 00:22:17.509 ]' 00:22:17.509 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:17.509 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:17.509 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.509 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:17.509 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.509 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.509 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.509 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.799 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:22:17.799 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:22:18.400 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.400 17:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:18.400 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.400 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.400 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.400 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:18.400 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:18.400 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:18.400 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:18.400 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:18.661 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:22:18.661 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.661 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:18.661 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:18.661 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:18.661 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.661 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.661 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.661 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.661 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.661 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.661 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.661 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.661 00:22:18.922 
17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.922 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.922 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.922 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.922 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.922 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.922 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.922 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.922 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.922 { 00:22:18.922 "cntlid": 97, 00:22:18.922 "qid": 0, 00:22:18.922 "state": "enabled", 00:22:18.922 "thread": "nvmf_tgt_poll_group_000", 00:22:18.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:18.922 "listen_address": { 00:22:18.922 "trtype": "TCP", 00:22:18.922 "adrfam": "IPv4", 00:22:18.922 "traddr": "10.0.0.2", 00:22:18.922 "trsvcid": "4420" 00:22:18.922 }, 00:22:18.922 "peer_address": { 00:22:18.922 "trtype": "TCP", 00:22:18.922 "adrfam": "IPv4", 00:22:18.922 "traddr": "10.0.0.1", 00:22:18.922 "trsvcid": "43788" 00:22:18.922 }, 00:22:18.922 "auth": { 00:22:18.922 "state": "completed", 00:22:18.922 "digest": "sha512", 00:22:18.922 "dhgroup": "null" 00:22:18.922 } 00:22:18.922 } 00:22:18.922 ]' 00:22:18.922 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.922 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.923 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.184 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:19.184 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.184 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.184 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.184 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.184 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:22:19.184 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:22:20.127 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.127 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:20.127 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.127 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.127 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.127 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:20.127 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:20.127 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:20.127 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:22:20.127 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:20.127 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:20.127 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:20.127 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:20.127 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.127 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.127 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.127 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.127 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.127 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.127 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.127 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.389 00:22:20.389 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:20.389 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:20.389 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.650 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.650 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.650 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.650 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.650 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.650 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:20.650 { 00:22:20.650 "cntlid": 99, 00:22:20.650 "qid": 0, 00:22:20.650 "state": "enabled", 00:22:20.650 "thread": "nvmf_tgt_poll_group_000", 00:22:20.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:20.650 "listen_address": { 00:22:20.650 "trtype": "TCP", 00:22:20.650 "adrfam": "IPv4", 00:22:20.650 "traddr": "10.0.0.2", 00:22:20.650 "trsvcid": "4420" 00:22:20.650 }, 00:22:20.650 "peer_address": { 00:22:20.650 "trtype": "TCP", 00:22:20.650 "adrfam": "IPv4", 00:22:20.650 "traddr": "10.0.0.1", 00:22:20.650 "trsvcid": "43810" 00:22:20.650 }, 00:22:20.650 "auth": { 00:22:20.650 "state": "completed", 00:22:20.650 "digest": "sha512", 00:22:20.650 "dhgroup": "null" 00:22:20.650 } 00:22:20.650 } 00:22:20.650 ]' 00:22:20.650 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:20.650 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:20.650 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:20.650 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:20.650 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:20.650 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.650 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.650 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.912 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:22:20.912 17:38:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:22:21.483 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.483 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:21.483 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.483 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.483 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.483 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.483 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:21.483 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:21.744 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:22:21.744 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:21.744 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:21.744 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:21.744 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:21.744 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.744 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.744 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.744 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.744 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.744 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.744 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:22:21.744 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.006 00:22:22.006 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:22.006 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:22.006 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.267 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.267 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.267 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.267 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.267 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.267 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:22.267 { 00:22:22.267 "cntlid": 101, 00:22:22.267 "qid": 0, 00:22:22.267 "state": "enabled", 00:22:22.267 "thread": "nvmf_tgt_poll_group_000", 00:22:22.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:22.267 "listen_address": { 00:22:22.267 "trtype": "TCP", 00:22:22.267 "adrfam": "IPv4", 00:22:22.267 "traddr": "10.0.0.2", 00:22:22.267 "trsvcid": "4420" 00:22:22.267 }, 00:22:22.267 "peer_address": { 00:22:22.267 "trtype": "TCP", 00:22:22.267 "adrfam": "IPv4", 00:22:22.267 "traddr": "10.0.0.1", 00:22:22.267 "trsvcid": "43842" 00:22:22.267 }, 00:22:22.267 "auth": { 00:22:22.267 "state": "completed", 00:22:22.267 "digest": "sha512", 00:22:22.267 "dhgroup": "null" 00:22:22.267 } 00:22:22.267 } 00:22:22.267 ]' 00:22:22.267 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:22.267 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:22.267 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:22.267 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:22.267 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:22.267 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.267 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.267 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.528 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:22:22.528 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:22:23.102 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.102 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:23.102 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.102 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.102 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.102 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:23.102 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:23.102 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:23.363 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:22:23.363 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.363 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:23.363 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:23.363 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:23.363 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.363 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:23.363 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.363 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.363 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.363 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:23.363 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:23.363 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:23.624 00:22:23.624 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.624 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.624 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.885 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.885 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.885 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.885 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.885 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.885 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.885 { 00:22:23.885 "cntlid": 103, 00:22:23.885 "qid": 0, 00:22:23.885 "state": "enabled", 00:22:23.885 "thread": "nvmf_tgt_poll_group_000", 00:22:23.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:23.885 "listen_address": { 00:22:23.885 "trtype": "TCP", 00:22:23.885 "adrfam": "IPv4", 00:22:23.885 "traddr": "10.0.0.2", 00:22:23.885 "trsvcid": "4420" 00:22:23.885 }, 00:22:23.885 "peer_address": { 00:22:23.885 "trtype": "TCP", 00:22:23.885 "adrfam": "IPv4", 00:22:23.885 "traddr": "10.0.0.1", 00:22:23.885 "trsvcid": "39844" 00:22:23.885 }, 00:22:23.885 "auth": { 00:22:23.885 "state": "completed", 00:22:23.885 "digest": "sha512", 00:22:23.885 "dhgroup": "null" 00:22:23.885 } 00:22:23.885 } 00:22:23.885 ]' 00:22:23.885 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.885 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.885 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.885 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:23.885 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.885 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.885 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.885 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.146 17:38:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:22:24.146 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:22:24.718 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.718 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:24.718 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.718 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.718 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.718 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:24.718 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:24.718 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:24.718 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:24.978 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:22:24.978 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:24.978 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:24.978 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:24.978 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:24.978 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.978 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.978 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.978 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.978 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.978 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:22:24.978 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.978 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.239 00:22:25.239 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:25.239 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:25.239 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.501 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.501 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.501 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.501 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.501 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.501 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:25.501 { 00:22:25.501 "cntlid": 105, 00:22:25.501 "qid": 0, 00:22:25.501 "state": "enabled", 00:22:25.501 "thread": "nvmf_tgt_poll_group_000", 00:22:25.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:25.501 "listen_address": { 00:22:25.501 "trtype": "TCP", 00:22:25.501 "adrfam": "IPv4", 00:22:25.501 "traddr": "10.0.0.2", 00:22:25.501 "trsvcid": "4420" 00:22:25.501 }, 00:22:25.501 "peer_address": { 00:22:25.501 "trtype": "TCP", 00:22:25.501 "adrfam": "IPv4", 00:22:25.501 "traddr": "10.0.0.1", 00:22:25.501 "trsvcid": "39874" 00:22:25.501 }, 00:22:25.501 "auth": { 00:22:25.501 "state": "completed", 00:22:25.501 "digest": "sha512", 00:22:25.501 "dhgroup": "ffdhe2048" 00:22:25.501 } 00:22:25.501 } 00:22:25.501 ]' 00:22:25.501 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:25.501 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:25.501 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:25.501 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:25.501 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:25.501 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.501 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.501 17:38:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.762 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:22:25.762 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:22:26.334 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.334 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:26.334 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.334 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.334 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.334 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:26.334 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:26.334 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:26.594 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:22:26.594 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.594 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:26.594 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:26.594 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:26.594 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.594 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.594 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.594 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:26.594 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.594 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.594 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.594 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.854 00:22:26.854 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:26.854 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:26.854 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.115 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.115 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.115 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.115 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.115 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.115 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:27.115 { 00:22:27.115 "cntlid": 107, 00:22:27.115 "qid": 0, 00:22:27.115 "state": "enabled", 00:22:27.115 "thread": "nvmf_tgt_poll_group_000", 00:22:27.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:27.115 "listen_address": { 00:22:27.115 "trtype": "TCP", 00:22:27.115 "adrfam": "IPv4", 00:22:27.115 "traddr": "10.0.0.2", 00:22:27.115 "trsvcid": "4420" 00:22:27.115 }, 00:22:27.115 "peer_address": { 00:22:27.115 "trtype": "TCP", 00:22:27.115 "adrfam": "IPv4", 00:22:27.115 "traddr": "10.0.0.1", 00:22:27.115 "trsvcid": "39906" 00:22:27.115 }, 00:22:27.115 "auth": { 00:22:27.115 "state": "completed", 00:22:27.115 "digest": "sha512", 00:22:27.115 "dhgroup": "ffdhe2048" 00:22:27.115 } 00:22:27.115 } 00:22:27.115 ]' 00:22:27.115 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:27.115 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:27.115 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:27.115 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:27.115 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:22:27.115 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.115 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.115 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.375 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:22:27.376 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:22:27.948 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.948 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:27.948 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.948 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.948 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.948 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:27.948 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:27.948 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:28.208 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:22:28.208 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.208 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:28.208 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:28.208 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:28.208 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.208 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
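
Each digest/dhgroup/key iteration in this trace follows the same fixed shape: pin the host-side bdev_nvme options, register the host NQN on the subsystem with the keys under test, attach a controller, inspect the negotiated auth parameters on the resulting qpair, and detach. The sketch below distills one iteration from the records above. It is a sketch, not the script itself: HOSTRPC/TGTRPC/SUBNQN/HOSTNQN are illustrative shorthand, the target-side calls are assumed to use rpc.py's default socket (the trace only shows the wrapped rpc_cmd/hostrpc helpers), and key2/ckey2 are key names loaded into the keyring earlier in the run (outside this excerpt).

    # Illustrative shorthand, not names from auth.sh itself:
    HOSTRPC="scripts/rpc.py -s /var/tmp/host.sock"   # host-side (bdev_nvme) RPCs, as traced
    TGTRPC="scripts/rpc.py"                          # target-side RPCs; default socket assumed
    SUBNQN="nqn.2024-03.io.spdk:cnode0"
    HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396"

    # 1. Pin the initiator to the digest/dhgroup pair under test.
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # 2. Allow the host on the subsystem with the key pair under test.
    $TGTRPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # 3. Attach a controller; giving a ctrlr key makes the authentication bidirectional.
    $HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    $HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0

    # 4. Confirm what was actually negotiated on the resulting qpair.
    $TGTRPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest'   # expect sha512
    $TGTRPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.dhgroup'  # expect ffdhe2048
    $TGTRPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'    # expect "completed"

    # 5. Tear down before the next key index is tried.
    $HOSTRPC bdev_nvme_detach_controller nvme0
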
00:22:28.208 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.208 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.208 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.208 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.208 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.208 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.468 00:22:28.468 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:28.468 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:28.468 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.468 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.468 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.468 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.468 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.468 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.468 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:28.468 { 00:22:28.468 "cntlid": 109, 00:22:28.468 "qid": 0, 00:22:28.468 "state": "enabled", 00:22:28.468 "thread": "nvmf_tgt_poll_group_000", 00:22:28.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:28.468 "listen_address": { 00:22:28.468 "trtype": "TCP", 00:22:28.468 "adrfam": "IPv4", 00:22:28.469 "traddr": "10.0.0.2", 00:22:28.469 "trsvcid": "4420" 00:22:28.469 }, 00:22:28.469 "peer_address": { 00:22:28.469 "trtype": "TCP", 00:22:28.469 "adrfam": "IPv4", 00:22:28.469 "traddr": "10.0.0.1", 00:22:28.469 "trsvcid": "39922" 00:22:28.469 }, 00:22:28.469 "auth": { 00:22:28.469 "state": "completed", 00:22:28.469 "digest": "sha512", 00:22:28.469 "dhgroup": "ffdhe2048" 00:22:28.469 } 00:22:28.469 } 00:22:28.469 ]' 00:22:28.469 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:28.730 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.730 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:28.730 17:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:28.730 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:28.730 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.730 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.730 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.991 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:22:28.991 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:22:29.562 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.562 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:29.562 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.562 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.562 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.562 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:29.562 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:29.562 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:29.823 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:22:29.823 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:29.823 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:29.823 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:29.823 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:29.823 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.823 17:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:29.823 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.823 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.823 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.823 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:29.823 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:29.824 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:30.085 00:22:30.085 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:30.085 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:30.085 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.085 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.085 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.085 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.085 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.085 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.085 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:30.085 { 00:22:30.085 "cntlid": 111, 00:22:30.085 "qid": 0, 00:22:30.085 "state": "enabled", 00:22:30.085 "thread": "nvmf_tgt_poll_group_000", 00:22:30.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:30.085 "listen_address": { 00:22:30.085 "trtype": "TCP", 00:22:30.085 "adrfam": "IPv4", 00:22:30.085 "traddr": "10.0.0.2", 00:22:30.085 "trsvcid": "4420" 00:22:30.085 }, 00:22:30.085 "peer_address": { 00:22:30.085 "trtype": "TCP", 00:22:30.085 "adrfam": "IPv4", 00:22:30.085 "traddr": "10.0.0.1", 00:22:30.085 "trsvcid": "39936" 00:22:30.085 }, 00:22:30.085 "auth": { 00:22:30.085 "state": "completed", 00:22:30.085 "digest": "sha512", 00:22:30.085 "dhgroup": "ffdhe2048" 00:22:30.085 } 00:22:30.085 } 00:22:30.085 ]' 00:22:30.085 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:30.346 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.346 
17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:30.346 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:30.346 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:30.346 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.346 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.346 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.606 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:22:30.606 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:22:31.178 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.178 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:31.178 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.178 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.178 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.178 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:31.178 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.178 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:31.178 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:31.440 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:22:31.440 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:31.440 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:31.440 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:31.440 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:31.440 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.440 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:31.440 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.440 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.440 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.440 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:31.440 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:31.440 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:31.440 00:22:31.701 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:31.701 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:31.701 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.701 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.701 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.702 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.702 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.702 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.702 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:31.702 { 00:22:31.702 "cntlid": 113, 00:22:31.702 "qid": 0, 00:22:31.702 "state": "enabled", 00:22:31.702 "thread": "nvmf_tgt_poll_group_000", 00:22:31.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:31.702 "listen_address": { 00:22:31.702 "trtype": "TCP", 00:22:31.702 "adrfam": "IPv4", 00:22:31.702 "traddr": "10.0.0.2", 00:22:31.702 "trsvcid": "4420" 00:22:31.702 }, 00:22:31.702 "peer_address": { 00:22:31.702 "trtype": "TCP", 00:22:31.702 "adrfam": "IPv4", 00:22:31.702 "traddr": "10.0.0.1", 00:22:31.702 "trsvcid": "39958" 00:22:31.702 }, 00:22:31.702 "auth": { 00:22:31.702 "state": "completed", 00:22:31.702 "digest": "sha512", 00:22:31.702 "dhgroup": "ffdhe3072" 00:22:31.702 } 00:22:31.702 } 00:22:31.702 ]' 00:22:31.702 17:38:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:31.702 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:31.702 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:31.962 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:31.962 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:31.962 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.963 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.963 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.223 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:22:32.223 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:22:32.794 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.794 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:32.794 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.794 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.794 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.794 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:32.794 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:32.794 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:33.056 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:22:33.056 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:33.056 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:22:33.056 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:33.056 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:33.056 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.056 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.056 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.056 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.056 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.056 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.056 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.056 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.056 00:22:33.317 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:33.317 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:33.317 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.317 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.317 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.317 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.317 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.317 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.317 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:33.317 { 00:22:33.317 "cntlid": 115, 00:22:33.317 "qid": 0, 00:22:33.317 "state": "enabled", 00:22:33.317 "thread": "nvmf_tgt_poll_group_000", 00:22:33.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:33.317 "listen_address": { 00:22:33.317 "trtype": "TCP", 00:22:33.317 "adrfam": "IPv4", 00:22:33.317 "traddr": "10.0.0.2", 00:22:33.317 "trsvcid": "4420" 00:22:33.317 }, 00:22:33.317 "peer_address": { 00:22:33.317 "trtype": "TCP", 00:22:33.317 "adrfam": "IPv4", 
00:22:33.317 "traddr": "10.0.0.1", 00:22:33.317 "trsvcid": "55384" 00:22:33.317 }, 00:22:33.317 "auth": { 00:22:33.317 "state": "completed", 00:22:33.317 "digest": "sha512", 00:22:33.317 "dhgroup": "ffdhe3072" 00:22:33.317 } 00:22:33.317 } 00:22:33.317 ]' 00:22:33.317 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:33.317 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:33.579 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:33.579 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:33.579 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:33.579 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.579 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.579 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.840 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:22:33.840 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:22:34.412 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.412 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:34.412 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.412 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.412 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.412 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:34.412 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:34.412 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:34.673 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:22:34.673 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:34.673 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:34.673 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:34.673 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:34.673 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.673 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.673 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.673 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.673 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.673 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.673 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.673 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.934 00:22:34.934 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:34.934 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:34.934 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.934 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.934 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.934 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.934 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.934 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.934 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:34.934 { 00:22:34.934 "cntlid": 117, 00:22:34.934 "qid": 0, 00:22:34.934 "state": "enabled", 00:22:34.934 "thread": "nvmf_tgt_poll_group_000", 00:22:34.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:34.934 "listen_address": { 00:22:34.934 "trtype": "TCP", 
00:22:34.934 "adrfam": "IPv4", 00:22:34.934 "traddr": "10.0.0.2", 00:22:34.934 "trsvcid": "4420" 00:22:34.934 }, 00:22:34.934 "peer_address": { 00:22:34.934 "trtype": "TCP", 00:22:34.934 "adrfam": "IPv4", 00:22:34.934 "traddr": "10.0.0.1", 00:22:34.934 "trsvcid": "55428" 00:22:34.934 }, 00:22:34.934 "auth": { 00:22:34.934 "state": "completed", 00:22:34.934 "digest": "sha512", 00:22:34.934 "dhgroup": "ffdhe3072" 00:22:34.934 } 00:22:34.934 } 00:22:34.934 ]' 00:22:34.934 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.195 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.195 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.195 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:35.195 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.195 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.195 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.195 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.195 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:22:35.195 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:22:36.137 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.138 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:36.138 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.138 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.138 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.138 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.138 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:36.138 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:36.138 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:22:36.138 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.138 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:36.138 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:36.138 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:36.138 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.138 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:36.138 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.138 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.138 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.138 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:36.138 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:36.138 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:36.399 00:22:36.399 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:36.399 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:36.399 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.659 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.659 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.659 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.659 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.659 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.659 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:36.659 { 00:22:36.659 "cntlid": 119, 00:22:36.659 "qid": 0, 00:22:36.659 "state": "enabled", 00:22:36.659 "thread": "nvmf_tgt_poll_group_000", 00:22:36.659 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:36.659 "listen_address": { 00:22:36.659 "trtype": "TCP", 00:22:36.659 "adrfam": "IPv4", 00:22:36.659 "traddr": "10.0.0.2", 00:22:36.659 "trsvcid": "4420" 00:22:36.659 }, 00:22:36.659 "peer_address": { 00:22:36.659 "trtype": "TCP", 00:22:36.659 "adrfam": "IPv4", 00:22:36.659 "traddr": "10.0.0.1", 00:22:36.659 "trsvcid": "55450" 00:22:36.659 }, 00:22:36.659 "auth": { 00:22:36.659 "state": "completed", 00:22:36.659 "digest": "sha512", 00:22:36.659 "dhgroup": "ffdhe3072" 00:22:36.659 } 00:22:36.659 } 00:22:36.659 ]' 00:22:36.659 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:36.659 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:36.659 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:36.659 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:36.659 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:36.920 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.920 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.920 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.920 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:22:36.920 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:22:37.491 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.491 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:37.491 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.491 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.752 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.752 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:37.752 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:37.752 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:37.752 17:38:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:37.752 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:37.752 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:37.752 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:37.752 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:37.752 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:37.752 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.752 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.752 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.752 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.752 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.752 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.752 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.752 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.014 00:22:38.014 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:38.014 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:38.014 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.275 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.275 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.275 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.275 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.275 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.275 17:38:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.275 { 00:22:38.275 "cntlid": 121, 00:22:38.275 "qid": 0, 00:22:38.275 "state": "enabled", 00:22:38.275 "thread": "nvmf_tgt_poll_group_000", 00:22:38.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:38.275 "listen_address": { 00:22:38.275 "trtype": "TCP", 00:22:38.275 "adrfam": "IPv4", 00:22:38.275 "traddr": "10.0.0.2", 00:22:38.275 "trsvcid": "4420" 00:22:38.275 }, 00:22:38.275 "peer_address": { 00:22:38.275 "trtype": "TCP", 00:22:38.275 "adrfam": "IPv4", 00:22:38.275 "traddr": "10.0.0.1", 00:22:38.275 "trsvcid": "55480" 00:22:38.275 }, 00:22:38.275 "auth": { 00:22:38.275 "state": "completed", 00:22:38.275 "digest": "sha512", 00:22:38.275 "dhgroup": "ffdhe4096" 00:22:38.275 } 00:22:38.275 } 00:22:38.275 ]' 00:22:38.275 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:38.275 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:38.275 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:38.275 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:38.275 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.536 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.536 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.536 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.536 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:22:38.536 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:22:39.108 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.369 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:39.369 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.369 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.369 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:22:39.369 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:39.369 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:39.369 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:39.369 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:39.369 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.369 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:39.369 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:39.369 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:39.369 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.369 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.369 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.369 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.369 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.369 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.369 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.369 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.631 00:22:39.631 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:39.631 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:39.631 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.892 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.892 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.892 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.892 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.892 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.892 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:39.892 { 00:22:39.892 "cntlid": 123, 00:22:39.892 "qid": 0, 00:22:39.892 "state": "enabled", 00:22:39.892 "thread": "nvmf_tgt_poll_group_000", 00:22:39.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:39.892 "listen_address": { 00:22:39.892 "trtype": "TCP", 00:22:39.892 "adrfam": "IPv4", 00:22:39.892 "traddr": "10.0.0.2", 00:22:39.892 "trsvcid": "4420" 00:22:39.892 }, 00:22:39.892 "peer_address": { 00:22:39.892 "trtype": "TCP", 00:22:39.892 "adrfam": "IPv4", 00:22:39.892 "traddr": "10.0.0.1", 00:22:39.892 "trsvcid": "55494" 00:22:39.892 }, 00:22:39.892 "auth": { 00:22:39.892 "state": "completed", 00:22:39.892 "digest": "sha512", 00:22:39.892 "dhgroup": "ffdhe4096" 00:22:39.892 } 00:22:39.892 } 00:22:39.892 ]' 00:22:39.892 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:39.892 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:39.892 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.153 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:40.153 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.153 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.153 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.153 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.153 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:22:40.153 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:22:41.095 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.095 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:41.095 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.095 17:38:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.095 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.095 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:41.095 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:41.095 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:41.095 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:41.095 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.095 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:41.095 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:41.095 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:41.095 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.095 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.095 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.095 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.095 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.095 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.095 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.095 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.356 00:22:41.356 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:41.356 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:41.356 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.618 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.618 17:38:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.618 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.618 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.618 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.618 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:41.618 { 00:22:41.618 "cntlid": 125, 00:22:41.618 "qid": 0, 00:22:41.618 "state": "enabled", 00:22:41.618 "thread": "nvmf_tgt_poll_group_000", 00:22:41.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:41.618 "listen_address": { 00:22:41.618 "trtype": "TCP", 00:22:41.618 "adrfam": "IPv4", 00:22:41.618 "traddr": "10.0.0.2", 00:22:41.618 "trsvcid": "4420" 00:22:41.618 }, 00:22:41.618 "peer_address": { 00:22:41.618 "trtype": "TCP", 00:22:41.618 "adrfam": "IPv4", 00:22:41.618 "traddr": "10.0.0.1", 00:22:41.618 "trsvcid": "55508" 00:22:41.618 }, 00:22:41.618 "auth": { 00:22:41.618 "state": "completed", 00:22:41.618 "digest": "sha512", 00:22:41.618 "dhgroup": "ffdhe4096" 00:22:41.618 } 00:22:41.618 } 00:22:41.618 ]' 00:22:41.618 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:41.618 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:41.618 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:41.618 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:41.618 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:41.618 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.618 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.618 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.879 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:22:41.879 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:22:42.451 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.451 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:42.451 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.451 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.451 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.451 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:42.451 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:42.714 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:42.714 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:42.714 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:42.714 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:42.714 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:42.714 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:42.714 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.714 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:42.714 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.714 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.714 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.714 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:42.714 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:42.714 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:42.977 00:22:42.977 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:42.977 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:42.977 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.238 17:38:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.238 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.238 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.238 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.238 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.238 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:43.238 { 00:22:43.238 "cntlid": 127, 00:22:43.238 "qid": 0, 00:22:43.238 "state": "enabled", 00:22:43.238 "thread": "nvmf_tgt_poll_group_000", 00:22:43.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:43.238 "listen_address": { 00:22:43.238 "trtype": "TCP", 00:22:43.238 "adrfam": "IPv4", 00:22:43.238 "traddr": "10.0.0.2", 00:22:43.238 "trsvcid": "4420" 00:22:43.238 }, 00:22:43.238 "peer_address": { 00:22:43.238 "trtype": "TCP", 00:22:43.238 "adrfam": "IPv4", 00:22:43.238 "traddr": "10.0.0.1", 00:22:43.238 "trsvcid": "43276" 00:22:43.238 }, 00:22:43.238 "auth": { 00:22:43.238 "state": "completed", 00:22:43.238 "digest": "sha512", 00:22:43.238 "dhgroup": "ffdhe4096" 00:22:43.238 } 00:22:43.238 } 00:22:43.238 ]' 00:22:43.238 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:43.238 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:43.238 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.238 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:43.238 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:43.238 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.238 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.238 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.500 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:22:43.500 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:22:44.071 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.332 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:44.332 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.332 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.332 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.332 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:44.332 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:44.332 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:44.332 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:44.332 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:44.332 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:44.332 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:44.332 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:44.332 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.904 00:22:44.904 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:44.904 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:44.904 
17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.904 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.904 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.904 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.904 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.904 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.904 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:44.904 { 00:22:44.904 "cntlid": 129, 00:22:44.904 "qid": 0, 00:22:44.904 "state": "enabled", 00:22:44.904 "thread": "nvmf_tgt_poll_group_000", 00:22:44.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:44.904 "listen_address": { 00:22:44.904 "trtype": "TCP", 00:22:44.904 "adrfam": "IPv4", 00:22:44.904 "traddr": "10.0.0.2", 00:22:44.904 "trsvcid": "4420" 00:22:44.904 }, 00:22:44.904 "peer_address": { 00:22:44.904 "trtype": "TCP", 00:22:44.905 "adrfam": "IPv4", 00:22:44.905 "traddr": "10.0.0.1", 00:22:44.905 "trsvcid": "43292" 00:22:44.905 }, 00:22:44.905 "auth": { 00:22:44.905 "state": "completed", 00:22:44.905 "digest": "sha512", 00:22:44.905 "dhgroup": "ffdhe6144" 00:22:44.905 } 00:22:44.905 } 00:22:44.905 ]' 00:22:44.905 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:44.905 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:44.905 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:44.905 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:44.905 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:45.167 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.167 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.167 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.167 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:22:45.167 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret 
DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:22:46.110 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.110 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:46.110 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.110 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.110 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.110 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:46.110 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:46.110 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:46.110 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:46.110 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:46.110 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:46.110 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:46.110 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:46.110 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.110 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.110 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.110 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.110 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.110 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.110 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.110 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.371 00:22:46.371 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:46.371 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:46.371 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.631 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.631 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.631 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.631 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.631 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.631 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:46.631 { 00:22:46.631 "cntlid": 131, 00:22:46.631 "qid": 0, 00:22:46.631 "state": "enabled", 00:22:46.631 "thread": "nvmf_tgt_poll_group_000", 00:22:46.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:46.631 "listen_address": { 00:22:46.631 "trtype": "TCP", 00:22:46.631 "adrfam": "IPv4", 00:22:46.631 "traddr": "10.0.0.2", 00:22:46.632 "trsvcid": "4420" 00:22:46.632 }, 00:22:46.632 "peer_address": { 00:22:46.632 "trtype": "TCP", 00:22:46.632 "adrfam": "IPv4", 00:22:46.632 "traddr": "10.0.0.1", 00:22:46.632 "trsvcid": "43332" 00:22:46.632 }, 00:22:46.632 "auth": { 00:22:46.632 "state": "completed", 00:22:46.632 "digest": "sha512", 00:22:46.632 "dhgroup": "ffdhe6144" 00:22:46.632 } 00:22:46.632 } 00:22:46.632 ]' 00:22:46.632 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:46.632 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:46.632 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:46.893 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:46.893 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:46.893 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.893 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.893 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.893 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:22:46.893 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:22:47.836 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.836 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:47.836 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.836 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.836 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.836 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:47.836 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:47.836 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:47.836 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:47.836 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:47.836 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:47.836 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:47.836 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:47.836 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.836 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.836 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.836 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.836 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.836 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.836 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.836 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.096 00:22:48.096 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:48.096 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:48.096 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.357 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.357 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.357 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.357 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.357 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.357 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:48.357 { 00:22:48.357 "cntlid": 133, 00:22:48.357 "qid": 0, 00:22:48.357 "state": "enabled", 00:22:48.357 "thread": "nvmf_tgt_poll_group_000", 00:22:48.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:48.357 "listen_address": { 00:22:48.357 "trtype": "TCP", 00:22:48.357 "adrfam": "IPv4", 00:22:48.357 "traddr": "10.0.0.2", 00:22:48.357 "trsvcid": "4420" 00:22:48.357 }, 00:22:48.357 "peer_address": { 00:22:48.357 "trtype": "TCP", 00:22:48.357 "adrfam": "IPv4", 00:22:48.357 "traddr": "10.0.0.1", 00:22:48.357 "trsvcid": "43376" 00:22:48.357 }, 00:22:48.357 "auth": { 00:22:48.357 "state": "completed", 00:22:48.357 "digest": "sha512", 00:22:48.357 "dhgroup": "ffdhe6144" 00:22:48.357 } 00:22:48.357 } 00:22:48.357 ]' 00:22:48.357 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:48.357 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:48.357 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:48.357 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:48.357 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:48.618 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.618 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.618 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.618 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret 
DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:22:48.619 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:22:49.261 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.261 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:49.261 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.261 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.599 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.599 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:49.599 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:49.599 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:49.599 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:49.599 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:49.599 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:49.599 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:49.599 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:49.599 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.599 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:49.599 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.599 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.599 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.599 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:49.599 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:49.599 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:49.883 00:22:49.883 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:49.883 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:49.883 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.143 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.143 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.143 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.143 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.143 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.143 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:50.143 { 00:22:50.143 "cntlid": 135, 00:22:50.143 "qid": 0, 00:22:50.143 "state": "enabled", 00:22:50.143 "thread": "nvmf_tgt_poll_group_000", 00:22:50.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:50.143 "listen_address": { 00:22:50.143 "trtype": "TCP", 00:22:50.143 "adrfam": "IPv4", 00:22:50.143 "traddr": "10.0.0.2", 00:22:50.143 "trsvcid": "4420" 00:22:50.143 }, 00:22:50.143 "peer_address": { 00:22:50.143 "trtype": "TCP", 00:22:50.143 "adrfam": "IPv4", 00:22:50.143 "traddr": "10.0.0.1", 00:22:50.143 "trsvcid": "43406" 00:22:50.143 }, 00:22:50.144 "auth": { 00:22:50.144 "state": "completed", 00:22:50.144 "digest": "sha512", 00:22:50.144 "dhgroup": "ffdhe6144" 00:22:50.144 } 00:22:50.144 } 00:22:50.144 ]' 00:22:50.144 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:50.144 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:50.144 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:50.144 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:50.144 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:50.144 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.144 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.144 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.404 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:22:50.404 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:22:50.976 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.976 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:50.976 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.976 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.976 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.237 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:51.237 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:51.237 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:51.237 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:51.237 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:51.237 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:51.237 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:51.237 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:51.237 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:51.237 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.237 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.237 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.237 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.237 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.237 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.237 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.237 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.807 00:22:51.807 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.807 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:51.807 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.068 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.068 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:52.068 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.068 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.068 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.068 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:52.068 { 00:22:52.068 "cntlid": 137, 00:22:52.068 "qid": 0, 00:22:52.068 "state": "enabled", 00:22:52.068 "thread": "nvmf_tgt_poll_group_000", 00:22:52.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:52.068 "listen_address": { 00:22:52.068 "trtype": "TCP", 00:22:52.068 "adrfam": "IPv4", 00:22:52.068 "traddr": "10.0.0.2", 00:22:52.068 "trsvcid": "4420" 00:22:52.068 }, 00:22:52.068 "peer_address": { 00:22:52.068 "trtype": "TCP", 00:22:52.068 "adrfam": "IPv4", 00:22:52.068 "traddr": "10.0.0.1", 00:22:52.068 "trsvcid": "43428" 00:22:52.068 }, 00:22:52.068 "auth": { 00:22:52.068 "state": "completed", 00:22:52.068 "digest": "sha512", 00:22:52.068 "dhgroup": "ffdhe8192" 00:22:52.068 } 00:22:52.068 } 00:22:52.068 ]' 00:22:52.068 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:52.068 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:52.068 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:52.068 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:52.068 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:52.068 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.068 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.068 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.328 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:22:52.328 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:22:52.898 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.898 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:52.898 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.898 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.898 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.898 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:52.898 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:52.898 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:53.160 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:53.160 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:53.160 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:53.160 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:53.160 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:53.160 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:53.160 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.160 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.160 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.160 17:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.160 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.160 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.160 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.730 00:22:53.730 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:53.730 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:53.730 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.730 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.730 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.730 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.730 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.730 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.730 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:53.730 { 00:22:53.730 "cntlid": 139, 00:22:53.730 "qid": 0, 00:22:53.730 "state": "enabled", 00:22:53.730 "thread": "nvmf_tgt_poll_group_000", 00:22:53.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:53.730 "listen_address": { 00:22:53.730 "trtype": "TCP", 00:22:53.730 "adrfam": "IPv4", 00:22:53.730 "traddr": "10.0.0.2", 00:22:53.730 "trsvcid": "4420" 00:22:53.730 }, 00:22:53.730 "peer_address": { 00:22:53.730 "trtype": "TCP", 00:22:53.730 "adrfam": "IPv4", 00:22:53.730 "traddr": "10.0.0.1", 00:22:53.730 "trsvcid": "54404" 00:22:53.730 }, 00:22:53.730 "auth": { 00:22:53.730 "state": "completed", 00:22:53.730 "digest": "sha512", 00:22:53.730 "dhgroup": "ffdhe8192" 00:22:53.730 } 00:22:53.730 } 00:22:53.730 ]' 00:22:53.730 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:53.730 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:53.730 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:53.990 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:53.990 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:53.990 17:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.990 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.990 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.990 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:22:53.990 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: --dhchap-ctrl-secret DHHC-1:02:ZDgwYWRkMGY1MzBmZTc1ZWIwODZlZDZiOGRiYjBlYzI0NjY1ZjAxMGU4OWMzOTUw2x1tpg==: 00:22:54.933 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.933 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:54.933 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.933 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.933 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.933 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:54.933 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:54.933 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:54.933 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:54.933 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:54.933 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:54.933 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:54.933 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:54.933 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.934 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.934 17:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.934 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.934 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.934 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.934 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.934 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:55.506 00:22:55.506 17:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:55.506 17:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:55.506 17:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.506 17:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.506 17:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.506 17:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.506 17:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.506 17:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.506 17:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:55.506 { 00:22:55.506 "cntlid": 141, 00:22:55.506 "qid": 0, 00:22:55.506 "state": "enabled", 00:22:55.506 "thread": "nvmf_tgt_poll_group_000", 00:22:55.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:55.506 "listen_address": { 00:22:55.506 "trtype": "TCP", 00:22:55.506 "adrfam": "IPv4", 00:22:55.506 "traddr": "10.0.0.2", 00:22:55.506 "trsvcid": "4420" 00:22:55.506 }, 00:22:55.506 "peer_address": { 00:22:55.506 "trtype": "TCP", 00:22:55.506 "adrfam": "IPv4", 00:22:55.506 "traddr": "10.0.0.1", 00:22:55.506 "trsvcid": "54440" 00:22:55.506 }, 00:22:55.506 "auth": { 00:22:55.506 "state": "completed", 00:22:55.506 "digest": "sha512", 00:22:55.506 "dhgroup": "ffdhe8192" 00:22:55.506 } 00:22:55.506 } 00:22:55.506 ]' 00:22:55.506 17:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:55.768 17:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:55.768 17:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:55.768 17:38:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:55.768 17:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:55.768 17:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.768 17:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.768 17:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.028 17:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:22:56.028 17:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:01:YjA5OWJjNDQ1NTdiMzg4M2Y4ZGQwNzE1MTc4ZjQ0NWZetZJ3: 00:22:56.599 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:56.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:56.599 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:56.599 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.599 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.599 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.599 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:56.599 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:56.599 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:56.860 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:56.860 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:56.860 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:56.860 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:56.860 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:56.860 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:56.860 17:38:48 
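The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line that recurs in each connect_authenticate cycle is what makes bidirectional authentication optional per key: $3 is the key-id argument, and the ${...:+...} expansion emits the --dhchap-ctrlr-key option pair only when a controller key is configured for that id. A self-contained bash illustration of the idiom (the array contents here are made up; only the expansion pattern comes from the log):

  #!/usr/bin/env bash
  ckeys=("c0" "c1" "c2" "")      # key id 3 has no controller key
  for id in 0 1 2 3; do
      ckey=(${ckeys[$id]:+--dhchap-ctrlr-key "ckey$id"})
      echo "key$id -> ${#ckey[@]} extra arg(s): ${ckey[*]}"
  done
  # key3 yields zero extra args, so the attach runs with --dhchap-key key3 alone

This matches the key3 pass that follows, where nvmf_subsystem_add_host is called with --dhchap-key key3 and no --dhchap-ctrlr-key.
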
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:56.860 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.860 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.860 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.860 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:56.860 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:56.860 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:57.120 00:22:57.381 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:57.381 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:57.381 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.381 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.381 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.381 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.381 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.381 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.381 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:57.381 { 00:22:57.381 "cntlid": 143, 00:22:57.381 "qid": 0, 00:22:57.381 "state": "enabled", 00:22:57.381 "thread": "nvmf_tgt_poll_group_000", 00:22:57.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:57.381 "listen_address": { 00:22:57.381 "trtype": "TCP", 00:22:57.381 "adrfam": "IPv4", 00:22:57.381 "traddr": "10.0.0.2", 00:22:57.381 "trsvcid": "4420" 00:22:57.381 }, 00:22:57.381 "peer_address": { 00:22:57.381 "trtype": "TCP", 00:22:57.381 "adrfam": "IPv4", 00:22:57.381 "traddr": "10.0.0.1", 00:22:57.381 "trsvcid": "54462" 00:22:57.381 }, 00:22:57.381 "auth": { 00:22:57.381 "state": "completed", 00:22:57.381 "digest": "sha512", 00:22:57.381 "dhgroup": "ffdhe8192" 00:22:57.381 } 00:22:57.381 } 00:22:57.381 ]' 00:22:57.381 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:57.381 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:57.381 
17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:57.642 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:57.642 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:57.642 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:57.642 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:57.642 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.642 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:22:57.642 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:58.584 17:38:50 
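Before re-testing key0, the harness widens the host back to every supported digest and DH group at once; bdev_nvme_set_options takes these as comma-separated lists, and the IFS=, / printf %s pairs echoed above are the array-join producing them. A plausible reconstruction of that join (the digests/dhgroups array names are an assumption about auth.sh internals; hostrpc is the log's own wrapper, and the joined values are verbatim from the log):

  digests=(sha256 sha384 sha512)
  dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  hostrpc bdev_nvme_set_options \
      --dhchap-digests "$(IFS=,; printf %s "${digests[*]}")" \
      --dhchap-dhgroups "$(IFS=,; printf %s "${dhgroups[*]}")"

With IFS set to a comma, "${digests[*]}" expands to sha256,sha384,sha512, exactly the string passed to the RPC above.
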
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:58.584 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:59.155 00:22:59.155 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:59.155 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:59.155 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.155 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.155 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.155 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.155 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.155 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.155 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:59.155 { 00:22:59.155 "cntlid": 145, 00:22:59.156 "qid": 0, 00:22:59.156 "state": "enabled", 00:22:59.156 "thread": "nvmf_tgt_poll_group_000", 00:22:59.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:59.156 "listen_address": { 00:22:59.156 "trtype": "TCP", 00:22:59.156 "adrfam": "IPv4", 00:22:59.156 "traddr": "10.0.0.2", 00:22:59.156 "trsvcid": "4420" 00:22:59.156 }, 00:22:59.156 "peer_address": { 00:22:59.156 
"trtype": "TCP", 00:22:59.156 "adrfam": "IPv4", 00:22:59.156 "traddr": "10.0.0.1", 00:22:59.156 "trsvcid": "54484" 00:22:59.156 }, 00:22:59.156 "auth": { 00:22:59.156 "state": "completed", 00:22:59.156 "digest": "sha512", 00:22:59.156 "dhgroup": "ffdhe8192" 00:22:59.156 } 00:22:59.156 } 00:22:59.156 ]' 00:22:59.156 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:59.415 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:59.415 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:59.415 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:59.415 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:59.415 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.415 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.415 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.676 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:22:59.676 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTExNWQ0MGFlZWY3OWQ3NWNmZDJhZTQyMjg4MzhiYWVkNjk4ZGY2OTRlYTRlZDg2PikoXg==: --dhchap-ctrl-secret DHHC-1:03:NmEzNDZlMmUyMTc0ODE1ZDFjNDk5YjFmMzljNTI2MDhkY2IzYTVjNTU5YzI4N2NhNGE5ZGUyMDRjNTYxZTRjZHF4p6U=: 00:23:00.247 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.247 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:00.247 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.247 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.247 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.248 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:23:00.248 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.248 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.248 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.248 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:23:00.248 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:00.248 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:23:00.248 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:00.248 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:00.248 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:00.248 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:00.248 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:23:00.248 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:00.248 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:00.820 request: 00:23:00.820 { 00:23:00.820 "name": "nvme0", 00:23:00.820 "trtype": "tcp", 00:23:00.820 "traddr": "10.0.0.2", 00:23:00.820 "adrfam": "ipv4", 00:23:00.820 "trsvcid": "4420", 00:23:00.820 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:00.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:00.820 "prchk_reftag": false, 00:23:00.820 "prchk_guard": false, 00:23:00.820 "hdgst": false, 00:23:00.820 "ddgst": false, 00:23:00.820 "dhchap_key": "key2", 00:23:00.820 "allow_unrecognized_csi": false, 00:23:00.820 "method": "bdev_nvme_attach_controller", 00:23:00.820 "req_id": 1 00:23:00.820 } 00:23:00.820 Got JSON-RPC error response 00:23:00.820 response: 00:23:00.820 { 00:23:00.820 "code": -5, 00:23:00.820 "message": "Input/output error" 00:23:00.820 } 00:23:00.820 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:00.820 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:00.820 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:00.820 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:00.820 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:00.820 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.820 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.820 17:38:52 
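The request/response dump above is the first negative test: the target now has the host registered for key1 only, so attaching with --dhchap-key key2 must fail, and the harness asserts that with the NOT wrapper (run the command, succeed only if it exits non-zero, hence the es=1 bookkeeping). The DH-HMAC-CHAP failure surfaces to the RPC caller as JSON-RPC code -5, "Input/output error". A bare-bones version of the same assertion with the NOT helper approximated inline (all flags are verbatim from the log; <hostnqn> is the placeholder used earlier):

  if scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q <hostnqn> \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2; then
      echo "attach with the wrong key unexpectedly succeeded" >&2
      exit 1
  fi
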
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.820 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:00.820 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.820 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.820 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.820 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:00.820 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:00.820 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:00.820 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:00.820 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:00.820 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:00.820 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:00.820 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:00.820 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:00.820 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:01.081 request: 00:23:01.081 { 00:23:01.081 "name": "nvme0", 00:23:01.081 "trtype": "tcp", 00:23:01.081 "traddr": "10.0.0.2", 00:23:01.081 "adrfam": "ipv4", 00:23:01.081 "trsvcid": "4420", 00:23:01.081 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:01.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:01.081 "prchk_reftag": false, 00:23:01.081 "prchk_guard": false, 00:23:01.081 "hdgst": false, 00:23:01.081 "ddgst": false, 00:23:01.081 "dhchap_key": "key1", 00:23:01.081 "dhchap_ctrlr_key": "ckey2", 00:23:01.081 "allow_unrecognized_csi": false, 00:23:01.082 "method": "bdev_nvme_attach_controller", 00:23:01.082 "req_id": 1 00:23:01.082 } 00:23:01.082 Got JSON-RPC error response 00:23:01.082 response: 00:23:01.082 { 00:23:01.082 "code": -5, 00:23:01.082 "message": "Input/output error" 00:23:01.082 } 00:23:01.082 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:01.082 17:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:01.082 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:01.082 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:01.082 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:01.082 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.082 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.082 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.082 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:23:01.082 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.082 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.082 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.082 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:01.082 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:01.082 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:01.082 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:01.082 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:01.082 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:01.082 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:01.082 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:01.082 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:01.082 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:01.652 request: 00:23:01.652 { 00:23:01.652 "name": "nvme0", 00:23:01.652 "trtype": "tcp", 00:23:01.652 "traddr": "10.0.0.2", 00:23:01.652 "adrfam": "ipv4", 00:23:01.652 "trsvcid": "4420", 00:23:01.652 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:01.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:01.652 "prchk_reftag": false, 00:23:01.652 "prchk_guard": false, 00:23:01.652 "hdgst": false, 00:23:01.652 "ddgst": false, 00:23:01.652 "dhchap_key": "key1", 00:23:01.652 "dhchap_ctrlr_key": "ckey1", 00:23:01.652 "allow_unrecognized_csi": false, 00:23:01.652 "method": "bdev_nvme_attach_controller", 00:23:01.652 "req_id": 1 00:23:01.652 } 00:23:01.652 Got JSON-RPC error response 00:23:01.652 response: 00:23:01.652 { 00:23:01.652 "code": -5, 00:23:01.652 "message": "Input/output error" 00:23:01.652 } 00:23:01.652 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:01.652 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:01.652 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:01.652 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:01.652 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:01.652 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.652 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.652 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.652 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 344918 00:23:01.652 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 344918 ']' 00:23:01.652 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 344918 00:23:01.652 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:01.652 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:01.652 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 344918 00:23:01.652 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:01.652 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:01.652 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 344918' 00:23:01.652 killing process with pid 344918 00:23:01.652 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 344918 00:23:01.652 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 344918 00:23:01.912 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:01.912 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:01.912 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:01.912 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:23:01.912 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=371442 00:23:01.912 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 371442 00:23:01.912 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:01.912 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 371442 ']' 00:23:01.912 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.912 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:01.912 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.912 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:01.912 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.853 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:02.853 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:23:02.853 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:02.853 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:02.853 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.853 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.853 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:02.853 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 371442 00:23:02.853 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 371442 ']' 00:23:02.853 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.853 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:02.853 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
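At this point the first target (pid 344918) has been killed and a new one (pid 371442) is started for the keyring phase. The restart flags visible above: --wait-for-rpc holds the app before subsystem initialization so keys can be loaded early, -L nvmf_auth enables that component's debug log flag, and the binary runs inside the cvl_0_0_ns_spdk network namespace; the unnamed rpc_cmd at target/auth.sh@164 presumably releases initialization once the socket is up, though the log does not show which method it sends. A sketch of the startup order, verbatim except for that elided step:

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  # ...wait for /var/tmp/spdk.sock to accept RPCs (the waitforlisten loop above)...
  scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.7GS
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LgI
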
00:23:02.853 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:02.853 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.853 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:02.853 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:23:02.853 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:23:02.853 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.853 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.114 null0 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.7GS 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.LgI ]] 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LgI 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.w18 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.wua ]] 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wua 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:03.114 17:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.O0t 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.7aq ]] 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7aq 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.oTc 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.114 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:03.115 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
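With the keys loaded through keyring_file_add_key, this second phase repeats the key3 authentication against keyring-backed material: nvmf_subsystem_add_host and bdev_nvme_attach_controller now reference key names instead of inline DHHC-1 secrets, and the target resolves them from the /tmp/spdk.key-* files registered above. The target-side setup for this pass condenses to the two calls below (file names verbatim from the log; ckey3 is genuinely absent, which is why the [[ -n '' ]] branch above skips it):

  scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.oTc
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key3
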
00:23:03.115 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:04.056 nvme0n1 00:23:04.056 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:04.056 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:04.056 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.056 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.056 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.056 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.056 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.056 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.056 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:04.056 { 00:23:04.056 "cntlid": 1, 00:23:04.056 "qid": 0, 00:23:04.056 "state": "enabled", 00:23:04.056 "thread": "nvmf_tgt_poll_group_000", 00:23:04.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:04.056 "listen_address": { 00:23:04.056 "trtype": "TCP", 00:23:04.056 "adrfam": "IPv4", 00:23:04.056 "traddr": "10.0.0.2", 00:23:04.056 "trsvcid": "4420" 00:23:04.056 }, 00:23:04.056 "peer_address": { 00:23:04.056 "trtype": "TCP", 00:23:04.056 "adrfam": "IPv4", 00:23:04.056 "traddr": "10.0.0.1", 00:23:04.056 "trsvcid": "54766" 00:23:04.056 }, 00:23:04.056 "auth": { 00:23:04.056 "state": "completed", 00:23:04.056 "digest": "sha512", 00:23:04.056 "dhgroup": "ffdhe8192" 00:23:04.056 } 00:23:04.056 } 00:23:04.056 ]' 00:23:04.056 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:04.056 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:04.056 17:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:04.056 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:04.056 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:04.317 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.317 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.317 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.317 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:23:04.317 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=: 00:23:04.888 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.148 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:05.148 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.148 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.148 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.148 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:05.148 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.148 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.148 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.148 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:05.148 17:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:05.148 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:05.148 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:05.148 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:05.148 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:05.149 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:05.149 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:05.149 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:05.149 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:05.149 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:05.149 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:05.409 request: 00:23:05.409 { 00:23:05.409 "name": "nvme0", 00:23:05.409 "trtype": "tcp", 00:23:05.409 "traddr": "10.0.0.2", 00:23:05.409 "adrfam": "ipv4", 00:23:05.409 "trsvcid": "4420", 00:23:05.409 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:05.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:05.409 "prchk_reftag": false, 00:23:05.409 "prchk_guard": false, 00:23:05.409 "hdgst": false, 00:23:05.409 "ddgst": false, 00:23:05.409 "dhchap_key": "key3", 00:23:05.409 "allow_unrecognized_csi": false, 00:23:05.409 "method": "bdev_nvme_attach_controller", 00:23:05.409 "req_id": 1 00:23:05.409 } 00:23:05.409 Got JSON-RPC error response 00:23:05.409 response: 00:23:05.409 { 00:23:05.409 "code": -5, 00:23:05.409 "message": "Input/output error" 00:23:05.409 } 00:23:05.409 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:05.409 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:05.409 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:05.409 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:05.409 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:23:05.409 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:23:05.409 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:05.409 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:05.670 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:05.670 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:05.670 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:05.670 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:05.670 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:05.670 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:05.670 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:05.670 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:05.670 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
00:23:05.409 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:23:05.409 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:23:05.409 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:23:05.409 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:23:05.670 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:23:05.670 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:23:05.670 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:23:05.670 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:23:05.670 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:05.670 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:23:05.670 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:05.670 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3
00:23:05.670 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:23:05.670 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:23:05.670 request:
00:23:05.670 {
00:23:05.670 "name": "nvme0",
00:23:05.670 "trtype": "tcp",
00:23:05.670 "traddr": "10.0.0.2",
00:23:05.670 "adrfam": "ipv4",
00:23:05.670 "trsvcid": "4420",
00:23:05.670 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:23:05.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:23:05.670 "prchk_reftag": false,
00:23:05.670 "prchk_guard": false,
00:23:05.670 "hdgst": false,
00:23:05.670 "ddgst": false,
00:23:05.670 "dhchap_key": "key3",
00:23:05.670 "allow_unrecognized_csi": false,
00:23:05.670 "method": "bdev_nvme_attach_controller",
00:23:05.670 "req_id": 1
00:23:05.670 }
00:23:05.671 Got JSON-RPC error response
00:23:05.671 response:
00:23:05.671 {
00:23:05.671 "code": -5,
00:23:05.671 "message": "Input/output error"
00:23:05.671 }
00:23:05.671 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:23:05.671 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:05.671 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:05.671 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:05.671 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:23:05.671 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:23:05.671 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:23:05.671 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:23:05.671 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:23:05.671 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:23:05.932 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:23:05.932 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:05.932 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:05.932 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:05.932 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:23:05.932 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:05.932 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:05.932 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:05.932 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:23:05.932 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:23:05.932 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:23:05.932 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:23:05.932 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:05.932 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:23:05.932 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:05.932 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:23:05.932 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:23:05.932 17:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:23:06.192 request:
00:23:06.192 {
00:23:06.192 "name": "nvme0",
00:23:06.192 "trtype": "tcp",
00:23:06.192 "traddr": "10.0.0.2",
00:23:06.192 "adrfam": "ipv4",
00:23:06.192 "trsvcid": "4420",
00:23:06.192 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:23:06.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:23:06.192 "prchk_reftag": false,
00:23:06.192 "prchk_guard": false,
00:23:06.192 "hdgst": false,
00:23:06.192 "ddgst": false,
00:23:06.192 "dhchap_key": "key0",
00:23:06.192 "dhchap_ctrlr_key": "key1",
00:23:06.192 "allow_unrecognized_csi": false,
00:23:06.192 "method": "bdev_nvme_attach_controller",
00:23:06.192 "req_id": 1
00:23:06.192 }
00:23:06.192 Got JSON-RPC error response
00:23:06.192 response:
00:23:06.192 {
00:23:06.192 "code": -5,
00:23:06.192 "message": "Input/output error"
00:23:06.192 }
00:23:06.192 17:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:23:06.192 17:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:06.192 17:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:06.192 17:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
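
This second negative check exercises key association rather than options: the host entry was removed and re-added on the target, so the key0/key1 pair the host now offers no longer matches what the subsystem expects and the handshake again surfaces as -5. For bidirectional authentication the controller key must be registered on the target and supplied by the host; a sketch under the same assumptions as above (key names and flag pairing assumed from this run):

  # Target side: allow the host with both a host key and a controller key.
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key key0 --dhchap-ctrlr-key key1
  # Host side: supply the matching pair when attaching the controller.
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
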
00:23:06.192 17:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:06.192 17:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:23:06.192 17:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:23:06.192 17:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:23:06.452 nvme0n1
00:23:06.452 17:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:23:06.453 17:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:23:06.453 17:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:06.712 17:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:06.712 17:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:06.712 17:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:06.973 17:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1
00:23:06.973 17:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:06.973 17:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:06.973 17:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:06.973 17:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:23:06.973 17:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:23:06.973 17:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:23:07.915 nvme0n1
00:23:07.915 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:23:07.915 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:23:07.915 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:07.915 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:07.915 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:07.915 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:07.915 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:07.915 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:07.915 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:23:07.915 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:07.915 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:23:08.176 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:08.176 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=:
00:23:08.176 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: --dhchap-ctrl-secret DHHC-1:03:YjdlZTNiOGU1OGFmMGM5MWE1NjFiOWZmNjMwYTE0MWI2YmJiYWIzNmU2YWQ2ZDhlOWMwMTg0ODcxYTJlYTY0MhwTWDQ=:
00:23:08.746 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:23:08.746 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:23:08.746 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:23:08.746 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:23:08.746 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:23:08.746 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:23:08.746 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:23:08.746 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:08.746 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
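
The nvme_connect/nvme_get_ctrlr pair above goes through the kernel initiator rather than the SPDK host daemon: nvme-cli passes both DHHC-1 secrets on the command line, and the helper then works out which fabrics controller was created by matching subsystem NQNs in sysfs. A sketch of that lookup (the sysfs attribute name is assumed; this excerpt only shows the glob and the comparison):

  # Find the kernel controller that belongs to the test subsystem.
  for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*; do
      [[ $(cat "$dev/subsysnqn") == nqn.2024-03.io.spdk:cnode0 ]] || continue
      nctrlr=$(basename "$dev")
      break
  done
  echo "controller: $nctrlr"
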
00:23:09.007 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:23:09.007 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:23:09.007 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:23:09.007 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:23:09.007 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:09.007 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:23:09.007 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:09.007 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1
00:23:09.007 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:23:09.007 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:23:09.267 request:
00:23:09.267 {
00:23:09.267 "name": "nvme0",
00:23:09.267 "trtype": "tcp",
00:23:09.267 "traddr": "10.0.0.2",
00:23:09.267 "adrfam": "ipv4",
00:23:09.267 "trsvcid": "4420",
00:23:09.267 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:23:09.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:23:09.267 "prchk_reftag": false,
00:23:09.267 "prchk_guard": false,
00:23:09.267 "hdgst": false,
00:23:09.267 "ddgst": false,
00:23:09.267 "dhchap_key": "key1",
00:23:09.267 "allow_unrecognized_csi": false,
00:23:09.267 "method": "bdev_nvme_attach_controller",
00:23:09.267 "req_id": 1
00:23:09.267 }
00:23:09.267 Got JSON-RPC error response
00:23:09.267 response:
00:23:09.267 {
00:23:09.267 "code": -5,
00:23:09.267 "message": "Input/output error"
00:23:09.267 }
00:23:09.267 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:23:09.267 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:09.267 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:09.267 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:09.267 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:09.267 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:09.267 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:10.207 nvme0n1
00:23:10.207 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:23:10.207 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:23:10.207 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:10.207 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:10.207 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:10.207 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:10.466 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:23:10.467 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:10.467 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:10.467 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:10.467 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:23:10.467 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:23:10.467 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:23:10.726 nvme0n1
00:23:10.726 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:23:10.726 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:23:10.726 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:10.986 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:10.986 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:10.986 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
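
This block is the re-key path for an already-authorized host: nvmf_subsystem_set_keys swaps the key association on the target without removing the host entry, and when it is invoked with no key arguments at all (as at @233) the requirement is dropped, which is why the subsequent bdev_connect with no --dhchap-key succeeds. A sketch of the rotation, with the same assumed names as in the earlier sketches:

  # Rotate to a new key pair on the target, then reconnect with it.
  rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
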
00:23:10.986 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3
00:23:10.986 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:10.986 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:10.987 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:10.987 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: '' 2s
00:23:10.987 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:23:10.987 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:23:10.987 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr:
00:23:10.987 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:23:10.987 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:23:10.987 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:23:10.987 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr: ]]
00:23:10.987 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NTQzYjEzOGQwYmY2YTYzNzBjNGRkYzcxYTAwZDJiNGKDNXVr:
00:23:10.987 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:23:10.987 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:23:10.987 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:23:13.536 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:23:13.536 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0
00:23:13.536 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME
00:23:13.536 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1
00:23:13.536 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME
00:23:13.536 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1
00:23:13.536 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0
00:23:13.537 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2
00:23:13.537 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:13.537 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:13.537 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:13.537 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: 2s
00:23:13.536 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:23:13.536 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:23:13.536 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:23:13.536 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==:
00:23:13.536 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:23:13.536 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:23:13.537 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:23:13.537 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==: ]]
00:23:13.537 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NzBhNWY5NDI3ODUzMjE3Yzk0YzQ4YTJkMTBmNDBjM2ZmYmNkYzA2Yzk2NWMwOWIy4Mh+LQ==:
00:23:13.537 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:23:13.537 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:23:15.452 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:23:15.452 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0
00:23:15.452 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME
00:23:15.452 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1
00:23:15.452 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME
00:23:15.452 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1
00:23:15.452 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0
00:23:15.452 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:15.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:15.452 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1
00:23:15.452 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:15.452 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:15.452 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
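
nvme_set_keys re-keys the live kernel controller by writing the DHHC-1 secrets into its nvme-fabrics sysfs attributes and then pausing so it can re-authenticate; only the echo, the device directory, and the 2s timeout are visible in this excerpt, so the attribute names below are assumptions:

  # Write new DH-HMAC-CHAP secrets to a live fabrics controller (sketch).
  dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
  echo "DHHC-1:01:..." > "$dev/dhchap_secret"       # host secret
  echo "DHHC-1:02:..." > "$dev/dhchap_ctrl_secret"  # controller secret
  sleep 2s  # allow re-authentication before checking the namespace again

The waitforblk loop that follows each write simply re-runs lsblk until nvme0n1 shows up again, confirming the connection survived the re-key.
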
00:23:15.452 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:23:15.452 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:23:15.452 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:23:16.022 nvme0n1
00:23:16.022 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:16.022 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:16.022 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:16.022 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:16.022 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:16.022 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:16.595 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers
00:23:16.595 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:16.595 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name'
00:23:16.595 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:16.595 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:23:16.595 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:16.595 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:16.595 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:16.595 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0
00:23:16.595 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0
00:23:16.855 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers
00:23:16.855 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:16.855 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name'
00:23:17.116 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:17.116 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:17.116 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:17.116 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:17.116 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:17.116 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:23:17.116 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:23:17.116 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:23:17.116 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
00:23:17.116 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:17.116 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:23:17.116 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:17.116 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:23:17.116 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:23:17.376 request:
00:23:17.376 {
00:23:17.376 "name": "nvme0",
00:23:17.376 "dhchap_key": "key1",
00:23:17.376 "dhchap_ctrlr_key": "key3",
00:23:17.376 "method": "bdev_nvme_set_keys",
00:23:17.376 "req_id": 1
00:23:17.376 }
00:23:17.376 Got JSON-RPC error response
00:23:17.376 response:
00:23:17.376 {
00:23:17.376 "code": -13,
00:23:17.376 "message": "Permission denied"
00:23:17.376 }
00:23:17.376 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:23:17.376 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:17.376 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:17.376 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:17.376 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:23:17.376 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:17.376 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
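
Note the different error code here: a bdev_nvme_set_keys call with a key the target does not accept is refused with -13 (Permission denied), whereas a failed connect-time handshake surfaces as -5 (Input/output error). A sketch of the negative check, under the same assumptions as the earlier sketches:

  # Re-keying an attached controller with a non-matching key must fail (-13).
  if rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key key3; then
    echo "set_keys unexpectedly succeeded" >&2
    exit 1
  fi
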
00:23:17.636 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 ))
00:23:17.636 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s
00:23:18.577 17:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:23:18.577 17:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:23:18.577 17:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:18.837 17:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 ))
00:23:18.837 17:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1
00:23:18.837 17:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:18.837 17:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:18.837 17:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:18.837 17:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:23:18.837 17:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:23:18.837 17:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:23:19.778 nvme0n1
00:23:19.778 17:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:19.778 17:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:19.778 17:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:19.778 17:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:19.778 17:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:23:19.778 17:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:23:19.778 17:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:23:19.778 17:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
00:23:19.778 17:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:19.778 17:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:23:19.778 17:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:19.778 17:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:23:19.778 17:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:23:20.038 request:
00:23:20.038 {
00:23:20.038 "name": "nvme0",
00:23:20.038 "dhchap_key": "key2",
00:23:20.038 "dhchap_ctrlr_key": "key0",
00:23:20.038 "method": "bdev_nvme_set_keys",
00:23:20.038 "req_id": 1
00:23:20.038 }
00:23:20.038 Got JSON-RPC error response
00:23:20.038 response:
00:23:20.038 {
00:23:20.038 "code": -13,
00:23:20.038 "message": "Permission denied"
00:23:20.038 }
00:23:20.038 17:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:23:20.038 17:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:20.038 17:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:20.038 17:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:20.038 17:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:23:20.038 17:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:23:20.038 17:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:20.299 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 ))
00:23:20.299 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s
00:23:21.240 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:23:21.240 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:23:21.240 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:21.500 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 ))
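
After the rejected re-key the test waits for the failed controller to be reaped: with --ctrlr-loss-timeout-sec 1 and --reconnect-delay-sec 1 it disappears on its own, and the jq length checks above just watch bdev_nvme_get_controllers until the list is empty. A sketch of that poll:

  # Wait until the host has no controllers left (sketch).
  while (( $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length) != 0 )); do
    sleep 1s
  done
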
00:23:21.500 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT
00:23:21.500 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup
00:23:21.500 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 344951
00:23:21.500 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 344951 ']'
00:23:21.500 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 344951
00:23:21.500 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname
00:23:21.500 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:21.500 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 344951
00:23:21.500 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:21.500 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:21.500 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 344951'
00:23:21.500 killing process with pid 344951
00:23:21.500 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 344951
00:23:21.501 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 344951
00:23:21.762 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini
00:23:21.762 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup
00:23:21.762 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync
00:23:21.762 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:21.762 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e
00:23:21.762 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:21.762 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:21.762 rmmod nvme_tcp
00:23:21.762 rmmod nvme_fabrics
00:23:21.762 rmmod nvme_keyring
00:23:21.762 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:21.762 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e
00:23:21.762 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0
00:23:21.762 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 371442 ']'
00:23:21.762 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 371442
00:23:21.762 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 371442 ']'
00:23:21.762 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 371442
00:23:21.762 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname
00:23:21.762 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:21.762 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 371442
00:23:22.023 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:23:22.023 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:23:22.023 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 371442'
00:23:22.023 killing process with pid 371442
00:23:22.023 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 371442
00:23:22.023 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 371442
00:23:22.023 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:23:22.023 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:23:22.023 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:23:22.023 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr
00:23:22.023 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save
00:23:22.023 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:23:22.023 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore
00:23:22.023 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:22.023 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:22.023 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:22.023 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:22.023 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:24.571 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:24.571 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.7GS /tmp/spdk.key-sha256.w18 /tmp/spdk.key-sha384.O0t /tmp/spdk.key-sha512.oTc /tmp/spdk.key-sha512.LgI /tmp/spdk.key-sha384.wua /tmp/spdk.key-sha256.7aq '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log
00:23:24.571
00:23:24.571 real 2m40.543s
00:23:24.571 user 6m0.253s
00:23:24.571 sys 0m24.921s
00:23:24.571 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:24.571 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:24.571 ************************************
00:23:24.571 END TEST nvmf_auth_target
00:23:24.571 ************************************
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']'
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:23:24.571 ************************************
00:23:24.571 START TEST nvmf_bdevio_no_huge
00:23:24.571 ************************************
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
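
The teardown above follows the usual autotest order: the host-side daemon (pid 344951) is killed first, then nvmftestfini kills the target (pid 371442), unloads the kernel NVMe-oF modules, restores iptables, flushes the test interfaces, and removes the temporary DHHC-1 key files; the real/user/sys block is the runtime of the whole nvmf_auth_target test. A sketch of the killprocess logic reconstructed from the trace (not the repo helper itself):

  # killprocess: verify the pid is alive and is one of ours, then kill and reap it.
  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 1                        # still running?
      process_name=$(ps --no-headers -o comm= "$pid")   # never kill sudo itself
      [ "$process_name" = sudo ] && return 1
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }
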
00:23:24.571 * Looking for test storage...
00:23:24.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-:
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-:
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<'
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 ))
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:23:24.571 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:23:24.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:24.572 --rc genhtml_branch_coverage=1
00:23:24.572 --rc genhtml_function_coverage=1
00:23:24.572 --rc genhtml_legend=1
00:23:24.572 --rc geninfo_all_blocks=1
00:23:24.572 --rc geninfo_unexecuted_blocks=1
00:23:24.572
00:23:24.572 '
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:23:24.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:24.572 --rc genhtml_branch_coverage=1
00:23:24.572 --rc genhtml_function_coverage=1
00:23:24.572 --rc genhtml_legend=1
00:23:24.572 --rc geninfo_all_blocks=1
00:23:24.572 --rc geninfo_unexecuted_blocks=1
00:23:24.572
00:23:24.572 '
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:23:24.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:24.572 --rc genhtml_branch_coverage=1
00:23:24.572 --rc genhtml_function_coverage=1
00:23:24.572 --rc genhtml_legend=1
00:23:24.572 --rc geninfo_all_blocks=1
00:23:24.572 --rc geninfo_unexecuted_blocks=1
00:23:24.572
00:23:24.572 '
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:23:24.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:24.572 --rc genhtml_branch_coverage=1
00:23:24.572 --rc genhtml_function_coverage=1
00:23:24.572 --rc genhtml_legend=1
00:23:24.572 --rc geninfo_all_blocks=1
00:23:24.572 --rc geninfo_unexecuted_blocks=1
00:23:24.572
00:23:24.572 '
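
The trace above is scripts/common.sh deciding whether the installed lcov (1.15) is older than 2 before exporting the extended coverage options. A standalone re-implementation of that field-wise comparison (a sketch, not the repo helper itself):

  # Compare dotted versions numerically, field by field.
  ver_lt() {
      local -a a b
      IFS=.- read -ra a <<< "$1"
      IFS=.- read -ra b <<< "$2"
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1  # equal
  }
  ver_lt 1.15 2 && echo "1.15 < 2: coverage options enabled"
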
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:23:24.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:24.572 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:32.716 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:32.717 
17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:32.717 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:32.717 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:32.717 Found net devices under 0000:31:00.0: cvl_0_0 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:32.717 Found net devices under 0000:31:00.1: cvl_0_1 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:32.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:32.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:23:32.717 00:23:32.717 --- 10.0.0.2 ping statistics --- 00:23:32.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.717 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:32.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:23:32.717 00:23:32.717 --- 10.0.0.1 ping statistics --- 00:23:32.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.717 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:23:32.717 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:32.718 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.718 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:32.718 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:32.718 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.718 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:32.718 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:32.718 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:32.718 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:32.718 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:32.718 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:32.718 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=380235 00:23:32.718 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 380235 00:23:32.718 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:32.718 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 380235 ']' 00:23:32.718 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.718 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:23:32.718 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.718 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:32.718 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:32.718 [2024-10-08 17:39:23.846464] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:23:32.718 [2024-10-08 17:39:23.846538] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:32.718 [2024-10-08 17:39:23.944093] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:32.718 [2024-10-08 17:39:24.049674] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.718 [2024-10-08 17:39:24.049729] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.718 [2024-10-08 17:39:24.049737] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.718 [2024-10-08 17:39:24.049744] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.718 [2024-10-08 17:39:24.049751] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
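The nvmfappstart sequence traced above reduces to one pattern: launch nvmf_tgt inside the server-side network namespace without hugepages, then block until its RPC socket is up. A minimal sketch of that pattern, using the binary path and flags shown in the trace; the polling loop is an illustrative stand-in for the harness's real waitforlisten helper, not its exact implementation:

# Instance 0, tracepoint mask 0xFFFF, no hugepages, 1024 MiB of plain
# malloc'd memory, reactors pinned by core mask 0x78 (cores 3-6).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!
# Only issue RPCs once the app is listening on its UNIX domain socket.
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done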
00:23:32.718 [2024-10-08 17:39:24.051284] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:23:32.718 [2024-10-08 17:39:24.051446] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:23:32.718 [2024-10-08 17:39:24.051603] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:32.718 [2024-10-08 17:39:24.051604] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:23:32.718 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:32.718 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:23:32.718 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:32.718 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:32.718 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:32.979 [2024-10-08 17:39:24.721968] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:32.979 Malloc0 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:32.979 [2024-10-08 17:39:24.775824] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:32.979 { 00:23:32.979 "params": { 00:23:32.979 "name": "Nvme$subsystem", 00:23:32.979 "trtype": "$TEST_TRANSPORT", 00:23:32.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.979 "adrfam": "ipv4", 00:23:32.979 "trsvcid": "$NVMF_PORT", 00:23:32.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.979 "hdgst": ${hdgst:-false}, 00:23:32.979 "ddgst": ${ddgst:-false} 00:23:32.979 }, 00:23:32.979 "method": "bdev_nvme_attach_controller" 00:23:32.979 } 00:23:32.979 EOF 00:23:32.979 )") 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:23:32.979 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:23:32.979 "params": { 00:23:32.979 "name": "Nvme1", 00:23:32.979 "trtype": "tcp", 00:23:32.979 "traddr": "10.0.0.2", 00:23:32.979 "adrfam": "ipv4", 00:23:32.979 "trsvcid": "4420", 00:23:32.979 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.979 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:32.979 "hdgst": false, 00:23:32.979 "ddgst": false 00:23:32.979 }, 00:23:32.979 "method": "bdev_nvme_attach_controller" 00:23:32.979 }' 00:23:32.979 [2024-10-08 17:39:24.834494] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
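Everything bdevio needs to reach the freshly created subsystem arrives through that --json /dev/fd/62 argument: gen_nvmf_target_json accumulates one heredoc fragment per subsystem and feeds the jq-merged result in on an anonymous fd, so no config file ever touches disk. A condensed sketch of the fragment this run produced, shown with the already-substituted values from the printf above (the real template uses "$NVMF_FIRST_TARGET_IP", "$NVMF_PORT", and friends):

# One bdev_nvme_attach_controller entry per subsystem; subsystem 1
# yields the Nvme1 controller the tests run against.
config+=("$(cat <<-EOF
{
  "params": {
    "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
    "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false, "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")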
00:23:32.979 [2024-10-08 17:39:24.834564] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid380563 ] 00:23:32.979 [2024-10-08 17:39:24.921227] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:33.240 [2024-10-08 17:39:25.027366] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.240 [2024-10-08 17:39:25.027527] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.240 [2024-10-08 17:39:25.027527] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.500 I/O targets: 00:23:33.500 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:33.500 00:23:33.500 00:23:33.500 CUnit - A unit testing framework for C - Version 2.1-3 00:23:33.500 http://cunit.sourceforge.net/ 00:23:33.500 00:23:33.500 00:23:33.500 Suite: bdevio tests on: Nvme1n1 00:23:33.500 Test: blockdev write read block ...passed 00:23:33.500 Test: blockdev write zeroes read block ...passed 00:23:33.500 Test: blockdev write zeroes read no split ...passed 00:23:33.500 Test: blockdev write zeroes read split ...passed 00:23:33.500 Test: blockdev write zeroes read split partial ...passed 00:23:33.500 Test: blockdev reset ...[2024-10-08 17:39:25.472980] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:33.500 [2024-10-08 17:39:25.473079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10950b0 (9): Bad file descriptor 00:23:33.760 [2024-10-08 17:39:25.616008] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:33.761 passed 00:23:33.761 Test: blockdev write read 8 blocks ...passed 00:23:33.761 Test: blockdev write read size > 128k ...passed 00:23:33.761 Test: blockdev write read invalid size ...passed 00:23:33.761 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:33.761 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:33.761 Test: blockdev write read max offset ...passed 00:23:34.022 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:34.022 Test: blockdev writev readv 8 blocks ...passed 00:23:34.022 Test: blockdev writev readv 30 x 1block ...passed 00:23:34.022 Test: blockdev writev readv block ...passed 00:23:34.023 Test: blockdev writev readv size > 128k ...passed 00:23:34.023 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:34.023 Test: blockdev comparev and writev ...[2024-10-08 17:39:25.924623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:34.023 [2024-10-08 17:39:25.924674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.023 [2024-10-08 17:39:25.924692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:34.023 [2024-10-08 17:39:25.924702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:34.023 [2024-10-08 17:39:25.925268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:34.023 [2024-10-08 17:39:25.925282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:34.023 [2024-10-08 17:39:25.925297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:34.023 [2024-10-08 17:39:25.925312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:34.023 [2024-10-08 17:39:25.925862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:34.023 [2024-10-08 17:39:25.925875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:34.023 [2024-10-08 17:39:25.925889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:34.023 [2024-10-08 17:39:25.925899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:34.023 [2024-10-08 17:39:25.926437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:34.023 [2024-10-08 17:39:25.926450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:34.023 [2024-10-08 17:39:25.926464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:34.023 [2024-10-08 17:39:25.926472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:34.023 passed 00:23:34.023 Test: blockdev nvme passthru rw ...passed 00:23:34.023 Test: blockdev nvme passthru vendor specific ...[2024-10-08 17:39:26.011824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:34.023 [2024-10-08 17:39:26.011845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:34.023 [2024-10-08 17:39:26.012210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:34.023 [2024-10-08 17:39:26.012222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:34.023 [2024-10-08 17:39:26.012606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:34.023 [2024-10-08 17:39:26.012616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:34.023 [2024-10-08 17:39:26.012998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:34.023 [2024-10-08 17:39:26.013011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:34.023 passed 00:23:34.284 Test: blockdev nvme admin passthru ...passed 00:23:34.284 Test: blockdev copy ...passed 00:23:34.284 00:23:34.284 Run Summary: Type Total Ran Passed Failed Inactive 00:23:34.284 suites 1 1 n/a 0 0 00:23:34.284 tests 23 23 23 0 0 00:23:34.284 asserts 152 152 152 0 n/a 00:23:34.284 00:23:34.284 Elapsed time = 1.486 seconds 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:34.544 rmmod nvme_tcp 00:23:34.544 rmmod nvme_fabrics 00:23:34.544 rmmod nvme_keyring 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 380235 ']' 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 380235 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 380235 ']' 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 380235 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 380235 00:23:34.544 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:23:34.804 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:23:34.804 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 380235' 00:23:34.804 killing process with pid 380235 00:23:34.804 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 380235 00:23:34.804 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 380235 00:23:34.804 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:34.804 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:34.804 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:34.804 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:34.804 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:23:34.804 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:34.804 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:23:34.804 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:34.804 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:34.804 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.804 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.804 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.349 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:37.349 00:23:37.349 real 0m12.782s 00:23:37.349 user 0m15.715s 00:23:37.349 sys 0m6.700s 00:23:37.349 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:37.349 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:23:37.349 ************************************ 00:23:37.349 END TEST nvmf_bdevio_no_huge 00:23:37.349 ************************************ 00:23:37.349 17:39:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:37.349 17:39:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:37.349 17:39:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:37.349 17:39:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:37.349 ************************************ 00:23:37.349 START TEST nvmf_tls 00:23:37.349 ************************************ 00:23:37.349 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:37.349 * Looking for test storage... 00:23:37.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:37.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.349 --rc genhtml_branch_coverage=1 00:23:37.349 --rc genhtml_function_coverage=1 00:23:37.349 --rc genhtml_legend=1 00:23:37.349 --rc geninfo_all_blocks=1 00:23:37.349 --rc geninfo_unexecuted_blocks=1 00:23:37.349 00:23:37.349 ' 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:37.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.349 --rc genhtml_branch_coverage=1 00:23:37.349 --rc genhtml_function_coverage=1 00:23:37.349 --rc genhtml_legend=1 00:23:37.349 --rc geninfo_all_blocks=1 00:23:37.349 --rc geninfo_unexecuted_blocks=1 00:23:37.349 00:23:37.349 ' 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:37.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.349 --rc genhtml_branch_coverage=1 00:23:37.349 --rc genhtml_function_coverage=1 00:23:37.349 --rc genhtml_legend=1 00:23:37.349 --rc geninfo_all_blocks=1 00:23:37.349 --rc geninfo_unexecuted_blocks=1 00:23:37.349 00:23:37.349 ' 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:37.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.349 --rc genhtml_branch_coverage=1 00:23:37.349 --rc genhtml_function_coverage=1 00:23:37.349 --rc genhtml_legend=1 00:23:37.349 --rc geninfo_all_blocks=1 00:23:37.349 --rc geninfo_unexecuted_blocks=1 00:23:37.349 00:23:37.349 ' 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
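This is the second time the harness has traced the lcov version probe (the bdevio run above did the same), so the logic is worth spelling out: split both version strings on dots, then compare component-wise until a pair of digits differs. A hedged reconstruction of just the '<' path exercised here; the real cmp_versions in scripts/common.sh dispatches on its operator argument and covers the other comparisons too:

lt() { cmp_versions "$1" '<' "$2"; }   # the wrapper traced as 'lt 1.15 2'

cmp_versions() {
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    # Walk the longer component list, as the traced guard
    # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) does.
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1  # left side newer: '<' fails
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0  # '<' holds, e.g. 1.15 vs 2
    done
    return 1  # all components equal: strictly-less fails
}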
00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.349 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:37.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:23:37.350 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
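Here, as in the bdevio run earlier, gather_supported_nvmf_pci_devs is building an allow-list: pci_bus_cache (filled by a PCI scan outside this excerpt, keyed "vendor:device") is probed for each NIC ID the tests support, and the appends above and below collect the hits. A condensed sketch of the selection, using only IDs visible in this trace:

intel=0x8086 mellanox=0x15b3                 # vendor IDs, as declared above
e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810 family
e810+=(${pci_bus_cache["$intel:0x159b"]})    # the ID both NICs below match
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # one of several ConnectX IDs probed
pci_devs=("${e810[@]}")  # the [[ e810 == e810 ]] check below keeps only e810 hits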
00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:45.492 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:45.492 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:45.492 Found net devices under 0000:31:00.0: cvl_0_0 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:45.492 Found net devices under 0000:31:00.1: cvl_0_1 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:45.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:45.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:23:45.492 00:23:45.492 --- 10.0.0.2 ping statistics --- 00:23:45.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.492 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:23:45.492 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:45.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:45.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:23:45.492 00:23:45.492 --- 10.0.0.1 ping statistics --- 00:23:45.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.493 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:23:45.493 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.493 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:23:45.493 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:45.493 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.493 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:45.493 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:45.493 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.493 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:45.493 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:45.493 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:45.493 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:45.493 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:45.493 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.493 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=385259 00:23:45.493 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 385259 00:23:45.493 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:45.493 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 385259 ']' 00:23:45.493 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.493 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:45.493 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.493 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:45.493 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.493 [2024-10-08 17:39:36.947739] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
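(For reference: the nvmftestinit trace above reduces to a short namespace-wiring sequence. A minimal standalone sketch of the same topology; the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are the ones detected in this run, and ipts is simply SPDK's iptables wrapper seen above:)

    # move the target-side port into its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address the initiator side (root namespace) and the target side
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP (port 4420) in from the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify reachability in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1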
00:23:45.493 [2024-10-08 17:39:36.947801] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:45.493 [2024-10-08 17:39:37.039958] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:45.493 [2024-10-08 17:39:37.131871] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:45.493 [2024-10-08 17:39:37.131937] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:45.493 [2024-10-08 17:39:37.131946] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:45.493 [2024-10-08 17:39:37.131953] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:45.493 [2024-10-08 17:39:37.131959] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:45.493 [2024-10-08 17:39:37.132772] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:23:46.067 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:46.067 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:23:46.067 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:23:46.067 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable
00:23:46.067 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:46.067 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:46.067 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']'
00:23:46.067 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl
00:23:46.067 true
00:23:46.067 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:23:46.067 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version
00:23:46.327 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0
00:23:46.328 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]]
00:23:46.328 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:23:46.588 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:23:46.588 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version
00:23:46.588 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13
00:23:46.848 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]]
00:23:46.848 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7
00:23:46.848 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:23:46.848 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version
00:23:47.109 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7
00:23:47.109 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]]
00:23:47.109 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:23:47.109 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls
00:23:47.370 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false
00:23:47.370 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]]
00:23:47.370 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls
00:23:47.370 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:23:47.370 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls
00:23:47.631 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true
00:23:47.631 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]]
00:23:47.631 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls
00:23:47.893 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:23:47.893 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls
00:23:47.893 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false
00:23:47.893 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]]
00:23:47.893 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1
00:23:47.893 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
00:23:47.893 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest
00:23:47.893 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1
00:23:47.893 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff
00:23:47.893 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1
00:23:47.893 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python -
00:23:48.155 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:23:48.155 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1
00:23:48.155 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1
00:23:48.155 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest
00:23:48.155 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1
00:23:48.155 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100
00:23:48.155 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1
00:23:48.155 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python -
00:23:48.155 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:23:48.155 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp
00:23:48.155 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.GFGlMms9ws
00:23:48.155 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp
00:23:48.155 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.PVy2cOApQe
00:23:48.155 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:23:48.155 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:23:48.155 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.GFGlMms9ws
00:23:48.155 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.PVy2cOApQe
00:23:48.155 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:23:48.155 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:23:48.727 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.GFGlMms9ws
00:23:48.727 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.GFGlMms9ws
00:23:48.727 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:23:48.727 [2024-10-08 17:39:40.561600] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:48.988 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:23:48.988 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:23:48.988 [2024-10-08 17:39:40.890388] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:23:48.988 [2024-10-08 17:39:40.890742] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:48.988 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:23:49.248 malloc0
00:23:49.248 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:23:49.509 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.GFGlMms9ws
00:23:49.509 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:23:49.769 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.GFGlMms9ws
00:23:59.765 Initializing NVMe Controllers
00:23:59.765 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:59.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:59.765 Initialization complete. Launching workers.
00:23:59.765 ========================================================
00:23:59.765 Latency(us)
00:23:59.765 Device Information : IOPS MiB/s Average min max
00:23:59.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18605.91 72.68 3439.96 1120.29 4120.53
00:23:59.765 ========================================================
00:23:59.765 Total : 18605.91 72.68 3439.96 1120.29 4120.53
00:23:59.765
00:23:59.765 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GFGlMms9ws
00:23:59.765 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:23:59.765 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:23:59.765 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:23:59.765 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GFGlMms9ws
00:23:59.765 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:59.765 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=388040
00:23:59.765 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:59.765 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 388040 /var/tmp/bdevperf.sock
00:23:59.765 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 388040 ']'
00:23:59.765 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:23:59.765 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:59.765 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:59.765 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
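(For reference: the format_interchange_psk helper traced above emits the NVMe TLS "PSK interchange format", NVMeTLSkey-1:<hash>:<base64>:, where the base64 payload is the configured key material plus a 4-byte CRC32. A minimal sketch of the equivalent computation in Python, mirroring the python - heredoc the script uses; it assumes, as the logged output suggests, that the ASCII bytes of the hex string are used verbatim and that the CRC32 is appended little-endian:)

    import base64
    import zlib

    def format_interchange_psk(key: str, hash_id: int) -> str:
        data = key.encode("ascii")                    # configured key bytes, used as-is
        crc = zlib.crc32(data).to_bytes(4, "little")  # integrity tag appended to the key
        b64 = base64.b64encode(data + crc).decode("ascii")
        return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, b64)

    # format_interchange_psk("00112233445566778899aabbccddeeff", 1) should reproduce the
    # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: value logged above.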
00:23:59.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:59.765 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:59.765 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:24:00.026 [2024-10-08 17:39:51.763491] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
00:24:00.026 [2024-10-08 17:39:51.763546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid388040 ]
00:24:00.026 [2024-10-08 17:39:51.839775] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:00.026 [2024-10-08 17:39:51.902214] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:24:00.598 17:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:00.598 17:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:24:00.598 17:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GFGlMms9ws
00:24:00.859 17:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:24:01.119 [2024-10-08 17:39:52.869413] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:24:01.119 TLSTESTn1
00:24:01.119 17:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:24:01.119 Running I/O for 10 seconds...
00:24:03.445 2727.00 IOPS, 10.65 MiB/s
[2024-10-08T15:39:56.378Z] 3082.50 IOPS, 12.04 MiB/s
[2024-10-08T15:39:57.321Z] 3028.67 IOPS, 11.83 MiB/s
[2024-10-08T15:39:58.261Z] 3501.75 IOPS, 13.68 MiB/s
[2024-10-08T15:39:59.203Z] 4077.80 IOPS, 15.93 MiB/s
[2024-10-08T15:40:00.143Z] 3982.17 IOPS, 15.56 MiB/s
[2024-10-08T15:40:01.084Z] 3900.14 IOPS, 15.23 MiB/s
[2024-10-08T15:40:02.466Z] 3870.12 IOPS, 15.12 MiB/s
[2024-10-08T15:40:03.407Z] 4147.78 IOPS, 16.20 MiB/s
[2024-10-08T15:40:03.407Z] 4128.00 IOPS, 16.12 MiB/s
00:24:11.415 Latency(us)
[2024-10-08T15:40:03.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:11.415 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:11.415 Verification LBA range: start 0x0 length 0x2000
00:24:11.415 TLSTESTn1 : 10.08 4106.58 16.04 0.00 0.00 31050.32 5980.16 82575.36
00:24:11.415 [2024-10-08T15:40:03.407Z] ===================================================================================================================
00:24:11.415 [2024-10-08T15:40:03.407Z] Total : 4106.58 16.04 0.00 0.00 31050.32 5980.16 82575.36
00:24:11.415
00:24:11.415 {
00:24:11.415 "results": [
00:24:11.415 {
00:24:11.415 "job": "TLSTESTn1",
00:24:11.415 "core_mask": "0x4",
00:24:11.415 "workload": "verify",
00:24:11.415 "status": "finished",
00:24:11.415 "verify_range": {
00:24:11.415 "start": 0,
00:24:11.415 "length": 8192
00:24:11.415 },
00:24:11.415 "queue_depth": 128,
00:24:11.415 "io_size": 4096,
00:24:11.415 "runtime": 10.083076,
00:24:11.415 "iops": 4106.584141585366,
00:24:11.415 "mibps": 16.041344303067834,
00:24:11.416 "io_failed": 0,
00:24:11.416 "io_timeout": 0,
00:24:11.416 "avg_latency_us": 31050.31721979375,
00:24:11.416 "min_latency_us": 5980.16,
00:24:11.416 "max_latency_us": 82575.36
00:24:11.416 }
00:24:11.416 ],
00:24:11.416 "core_count": 1
00:24:11.416 }
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 388040
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 388040 ']'
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 388040
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 388040
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 388040'
00:24:11.416 killing process with pid 388040
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 388040
00:24:11.416 Received shutdown signal, test time was about 10.000000 seconds
00:24:11.416
00:24:11.416 Latency(us)
[2024-10-08T15:40:03.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-10-08T15:40:03.408Z] ===================================================================================================================
00:24:11.416 [2024-10-08T15:40:03.408Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 388040
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PVy2cOApQe
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PVy2cOApQe
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PVy2cOApQe
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PVy2cOApQe
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=390378
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 390378 /var/tmp/bdevperf.sock
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 390378 ']'
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
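(The NOT prefix on run_bdevperf above inverts the wrapped command's exit status: this case passes only if the attach fails, since /tmp/tmp.PVy2cOApQe holds a key that was never registered with the target. A minimal sketch of the pattern; the real helper in autotest_common.sh also validates its argument, which is what the valid_exec_arg and type -t lines in this trace are doing:)

    NOT() {
        # succeed only when the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }

    # NOT run_bdevperf ... /tmp/tmp.PVy2cOApQe   # expected: attach fails, NOT returns 0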
00:24:11.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:11.416 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:24:11.677 [2024-10-08 17:40:03.435314] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
00:24:11.677 [2024-10-08 17:40:03.435371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390378 ]
00:24:11.677 [2024-10-08 17:40:03.511482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:11.677 [2024-10-08 17:40:03.563030] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:24:12.247 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:12.247 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:24:12.247 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PVy2cOApQe
00:24:12.507 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:24:12.767 [2024-10-08 17:40:04.533372] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:24:12.767 [2024-10-08 17:40:04.542628] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:24:12.767 [2024-10-08 17:40:04.543457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c98a20 (107): Transport endpoint is not connected
00:24:12.767 [2024-10-08 17:40:04.544454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c98a20 (9): Bad file descriptor
00:24:12.767 [2024-10-08 17:40:04.545455] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:12.767 [2024-10-08 17:40:04.545465] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:24:12.767 [2024-10-08 17:40:04.545471] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted
00:24:12.767 [2024-10-08 17:40:04.545478] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:12.767 request:
00:24:12.767 {
00:24:12.767 "name": "TLSTEST",
00:24:12.767 "trtype": "tcp",
00:24:12.767 "traddr": "10.0.0.2",
00:24:12.767 "adrfam": "ipv4",
00:24:12.767 "trsvcid": "4420",
00:24:12.767 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:24:12.767 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:24:12.767 "prchk_reftag": false,
00:24:12.767 "prchk_guard": false,
00:24:12.767 "hdgst": false,
00:24:12.767 "ddgst": false,
00:24:12.767 "psk": "key0",
00:24:12.767 "allow_unrecognized_csi": false,
00:24:12.767 "method": "bdev_nvme_attach_controller",
00:24:12.767 "req_id": 1
00:24:12.767 }
00:24:12.767 Got JSON-RPC error response
00:24:12.767 response:
00:24:12.767 {
00:24:12.767 "code": -5,
00:24:12.767 "message": "Input/output error"
00:24:12.767 }
00:24:12.767 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 390378
00:24:12.767 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 390378 ']'
00:24:12.767 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 390378
00:24:12.767 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:24:12.767 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:12.767 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 390378
00:24:12.767 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:24:12.767 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:24:12.767 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 390378'
00:24:12.767 killing process with pid 390378
00:24:12.767 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 390378
00:24:12.767 Received shutdown signal, test time was about 10.000000 seconds
00:24:12.768
00:24:12.768 Latency(us)
[2024-10-08T15:40:04.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-10-08T15:40:04.760Z] ===================================================================================================================
[2024-10-08T15:40:04.760Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 390378
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GFGlMms9ws
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GFGlMms9ws
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GFGlMms9ws
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GFGlMms9ws
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=390721
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 390721 /var/tmp/bdevperf.sock
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 390721 ']'
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:12.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:12.768 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:24:13.028 [2024-10-08 17:40:04.790943] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
00:24:13.028 [2024-10-08 17:40:04.791020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390721 ]
00:24:13.028 [2024-10-08 17:40:04.875599] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:13.028 [2024-10-08 17:40:04.927099] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:24:13.599 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:13.599 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:24:13.599 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GFGlMms9ws
00:24:13.861 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0
00:24:14.121 [2024-10-08 17:40:05.909501] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:24:14.121 [2024-10-08 17:40:05.915877] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:24:14.121 [2024-10-08 17:40:05.915898] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:24:14.121 [2024-10-08 17:40:05.915919] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:24:14.121 [2024-10-08 17:40:05.916658] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a7a20 (107): Transport endpoint is not connected
00:24:14.121 [2024-10-08 17:40:05.917655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a7a20 (9): Bad file descriptor
00:24:14.121 [2024-10-08 17:40:05.918657] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:14.121 [2024-10-08 17:40:05.918665] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:24:14.121 [2024-10-08 17:40:05.918671] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted
00:24:14.121 [2024-10-08 17:40:05.918678] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
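(Note the different failure mode here: the key file is the valid one, but the target resolves PSKs per (hostnqn, subnqn) pair -- the "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" string in the tcp_sock_get_key error is exactly that lookup identity -- and only host1 was registered via nvmf_subsystem_add_host earlier in this run. A sketch of the registration that would let this attach succeed, using the same rpc.py path and key name seen above:)

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0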
00:24:14.121 request: 00:24:14.121 { 00:24:14.121 "name": "TLSTEST", 00:24:14.121 "trtype": "tcp", 00:24:14.121 "traddr": "10.0.0.2", 00:24:14.121 "adrfam": "ipv4", 00:24:14.121 "trsvcid": "4420", 00:24:14.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.121 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:14.121 "prchk_reftag": false, 00:24:14.121 "prchk_guard": false, 00:24:14.121 "hdgst": false, 00:24:14.121 "ddgst": false, 00:24:14.121 "psk": "key0", 00:24:14.121 "allow_unrecognized_csi": false, 00:24:14.121 "method": "bdev_nvme_attach_controller", 00:24:14.121 "req_id": 1 00:24:14.121 } 00:24:14.121 Got JSON-RPC error response 00:24:14.121 response: 00:24:14.121 { 00:24:14.121 "code": -5, 00:24:14.121 "message": "Input/output error" 00:24:14.121 } 00:24:14.121 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 390721 00:24:14.121 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 390721 ']' 00:24:14.121 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 390721 00:24:14.121 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:14.121 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:14.121 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 390721 00:24:14.121 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:14.121 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:14.121 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 390721' 00:24:14.121 killing process with pid 390721 00:24:14.121 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 390721 00:24:14.121 Received shutdown signal, test time was about 10.000000 seconds 00:24:14.121 00:24:14.121 Latency(us) 00:24:14.121 [2024-10-08T15:40:06.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.121 [2024-10-08T15:40:06.113Z] =================================================================================================================== 00:24:14.121 [2024-10-08T15:40:06.113Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:14.121 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 390721 00:24:14.121 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:14.121 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:14.121 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:14.121 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:14.121 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:14.121 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GFGlMms9ws 00:24:14.121 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:14.121 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.GFGlMms9ws 00:24:14.121 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:14.121 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:14.121 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:14.121 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:14.121 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GFGlMms9ws 00:24:14.121 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:14.121 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:14.121 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:14.121 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GFGlMms9ws 00:24:14.121 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:14.382 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=390892 00:24:14.382 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:14.382 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 390892 /var/tmp/bdevperf.sock 00:24:14.382 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:14.382 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 390892 ']' 00:24:14.382 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:14.382 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:14.382 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:14.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:14.382 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:14.382 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.382 [2024-10-08 17:40:06.162809] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:24:14.382 [2024-10-08 17:40:06.162864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390892 ] 00:24:14.382 [2024-10-08 17:40:06.242869] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.382 [2024-10-08 17:40:06.295235] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:15.321 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:15.321 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:15.322 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GFGlMms9ws 00:24:15.322 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:15.322 [2024-10-08 17:40:07.297810] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:15.322 [2024-10-08 17:40:07.309251] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:15.322 [2024-10-08 17:40:07.309270] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:15.322 [2024-10-08 17:40:07.309290] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:15.322 [2024-10-08 17:40:07.309899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1910a20 (107): Transport endpoint is not connected 00:24:15.322 [2024-10-08 17:40:07.310895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1910a20 (9): Bad file descriptor 00:24:15.322 [2024-10-08 17:40:07.311897] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:15.322 [2024-10-08 17:40:07.311904] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:15.322 [2024-10-08 17:40:07.311910] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:24:15.322 [2024-10-08 17:40:07.311917] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:24:15.582 request: 00:24:15.582 { 00:24:15.582 "name": "TLSTEST", 00:24:15.582 "trtype": "tcp", 00:24:15.582 "traddr": "10.0.0.2", 00:24:15.582 "adrfam": "ipv4", 00:24:15.582 "trsvcid": "4420", 00:24:15.582 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:15.582 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:15.582 "prchk_reftag": false, 00:24:15.582 "prchk_guard": false, 00:24:15.582 "hdgst": false, 00:24:15.582 "ddgst": false, 00:24:15.582 "psk": "key0", 00:24:15.582 "allow_unrecognized_csi": false, 00:24:15.582 "method": "bdev_nvme_attach_controller", 00:24:15.582 "req_id": 1 00:24:15.582 } 00:24:15.582 Got JSON-RPC error response 00:24:15.582 response: 00:24:15.582 { 00:24:15.582 "code": -5, 00:24:15.582 "message": "Input/output error" 00:24:15.582 } 00:24:15.582 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 390892 00:24:15.582 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 390892 ']' 00:24:15.582 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 390892 00:24:15.582 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:15.582 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:15.582 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 390892 00:24:15.582 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:15.582 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:15.582 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 390892' 00:24:15.582 killing process with pid 390892 00:24:15.582 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 390892 00:24:15.582 Received shutdown signal, test time was about 10.000000 seconds 00:24:15.582 00:24:15.582 Latency(us) 00:24:15.582 [2024-10-08T15:40:07.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.582 [2024-10-08T15:40:07.574Z] =================================================================================================================== 00:24:15.582 [2024-10-08T15:40:07.574Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:15.582 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 390892 00:24:15.582 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:15.582 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:15.582 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:15.582 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:15.582 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:15.582 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:15.582 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:15.582 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:15.582 17:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:15.583 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.583 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:15.583 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.583 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:15.583 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:15.583 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:15.583 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:15.583 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:15.583 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:15.583 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=391098 00:24:15.583 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:15.583 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 391098 /var/tmp/bdevperf.sock 00:24:15.583 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:15.583 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 391098 ']' 00:24:15.583 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:15.583 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:15.583 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:15.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:15.583 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:15.583 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.583 [2024-10-08 17:40:07.572635] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
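The NOT wrapper driving this block inverts the exit status: the step passes only if run_bdevperf fails, and here it is invoked with an empty PSK path, which must be rejected. A simplified paraphrase of the helper's logic; the real implementation in autotest_common.sh additionally checks that the argument is a callable function and bounds the accepted error codes, as the es handling in the xtrace shows:

NOT() {
    local es=0
    "$@" || es=$?
    # succeed only when the wrapped command failed
    (( es != 0 ))
}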
00:24:15.583 [2024-10-08 17:40:07.572691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391098 ] 00:24:15.843 [2024-10-08 17:40:07.650416] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.843 [2024-10-08 17:40:07.702271] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.414 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:16.414 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:16.414 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:24:16.674 [2024-10-08 17:40:08.512188] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:24:16.674 [2024-10-08 17:40:08.512208] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:16.674 request: 00:24:16.674 { 00:24:16.674 "name": "key0", 00:24:16.674 "path": "", 00:24:16.674 "method": "keyring_file_add_key", 00:24:16.674 "req_id": 1 00:24:16.674 } 00:24:16.674 Got JSON-RPC error response 00:24:16.674 response: 00:24:16.674 { 00:24:16.674 "code": -1, 00:24:16.674 "message": "Operation not permitted" 00:24:16.674 } 00:24:16.674 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:16.934 [2024-10-08 17:40:08.696720] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:16.934 [2024-10-08 17:40:08.696744] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:16.934 request: 00:24:16.934 { 00:24:16.934 "name": "TLSTEST", 00:24:16.934 "trtype": "tcp", 00:24:16.934 "traddr": "10.0.0.2", 00:24:16.934 "adrfam": "ipv4", 00:24:16.934 "trsvcid": "4420", 00:24:16.934 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.934 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:16.934 "prchk_reftag": false, 00:24:16.934 "prchk_guard": false, 00:24:16.934 "hdgst": false, 00:24:16.934 "ddgst": false, 00:24:16.934 "psk": "key0", 00:24:16.934 "allow_unrecognized_csi": false, 00:24:16.934 "method": "bdev_nvme_attach_controller", 00:24:16.934 "req_id": 1 00:24:16.935 } 00:24:16.935 Got JSON-RPC error response 00:24:16.935 response: 00:24:16.935 { 00:24:16.935 "code": -126, 00:24:16.935 "message": "Required key not available" 00:24:16.935 } 00:24:16.935 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 391098 00:24:16.935 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 391098 ']' 00:24:16.935 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 391098 00:24:16.935 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:16.935 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:16.935 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 391098 
00:24:16.935 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:16.935 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:16.935 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 391098' 00:24:16.935 killing process with pid 391098 00:24:16.935 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 391098 00:24:16.935 Received shutdown signal, test time was about 10.000000 seconds 00:24:16.935 00:24:16.935 Latency(us) 00:24:16.935 [2024-10-08T15:40:08.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.935 [2024-10-08T15:40:08.927Z] =================================================================================================================== 00:24:16.935 [2024-10-08T15:40:08.927Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:16.935 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 391098 00:24:16.935 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:16.935 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:16.935 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:16.935 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:16.935 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:16.935 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 385259 00:24:16.935 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 385259 ']' 00:24:16.935 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 385259 00:24:16.935 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:16.935 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:16.935 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 385259 00:24:17.196 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:17.196 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:17.197 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 385259' 00:24:17.197 killing process with pid 385259 00:24:17.197 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 385259 00:24:17.197 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 385259 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.OQboOdiAzi 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.OQboOdiAzi 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=391446 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 391446 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 391446 ']' 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:17.197 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.457 [2024-10-08 17:40:09.214142] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:24:17.457 [2024-10-08 17:40:09.214208] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.457 [2024-10-08 17:40:09.302288] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.457 [2024-10-08 17:40:09.362471] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.457 [2024-10-08 17:40:09.362507] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
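The key_long value generated above follows the NVMe/TCP TLS PSK interchange format: an NVMeTLSkey-1 prefix, a hash identifier (02 selecting SHA-384 here), and a base64 payload. A minimal sketch of what the format_interchange_psk helper appears to compute, assuming the payload is the raw key bytes followed by their CRC32 in little-endian order, which is consistent with the output captured above:

python - <<'EOF'
import base64, struct, zlib

key = b"00112233445566778899aabbccddeeff0011223344556677"
crc = struct.pack("<I", zlib.crc32(key))   # 4-byte little-endian CRC32 of the key
print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
EOF
# expected to reproduce the key_long value logged above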
00:24:17.457 [2024-10-08 17:40:09.362514] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.457 [2024-10-08 17:40:09.362518] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.457 [2024-10-08 17:40:09.362522] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:17.457 [2024-10-08 17:40:09.363028] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.027 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:18.027 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:18.027 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:18.027 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:18.027 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.287 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.287 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.OQboOdiAzi 00:24:18.287 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.OQboOdiAzi 00:24:18.287 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:18.287 [2024-10-08 17:40:10.200162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.287 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:18.547 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:18.547 [2024-10-08 17:40:10.536993] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:18.547 [2024-10-08 17:40:10.537193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.808 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:18.808 malloc0 00:24:18.808 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:19.068 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.OQboOdiAzi 00:24:19.068 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:19.328 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OQboOdiAzi 00:24:19.328 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:24:19.328 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:19.328 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:19.328 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OQboOdiAzi 00:24:19.328 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:19.328 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=391903 00:24:19.328 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:19.328 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 391903 /var/tmp/bdevperf.sock 00:24:19.328 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:19.328 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 391903 ']' 00:24:19.328 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.328 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:19.328 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:19.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:19.329 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:19.329 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.329 [2024-10-08 17:40:11.240853] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
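Before this bdevperf client was launched, setup_nvmf_tgt configured the target end to end. Collected from the xtrace, the RPC sequence reduces to the sketch below (full rpc.py path shortened; all flags as logged):

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k requests a TLS listener (hence the "TLS support is considered experimental" notice)
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.OQboOdiAzi
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0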
00:24:19.329 [2024-10-08 17:40:11.240910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391903 ] 00:24:19.329 [2024-10-08 17:40:11.315551] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.589 [2024-10-08 17:40:11.368002] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.160 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:20.160 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:20.160 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OQboOdiAzi 00:24:20.420 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:20.420 [2024-10-08 17:40:12.378345] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:20.680 TLSTESTn1 00:24:20.680 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:20.680 Running I/O for 10 seconds... 00:24:23.002 4878.00 IOPS, 19.05 MiB/s [2024-10-08T15:40:15.934Z] 4377.00 IOPS, 17.10 MiB/s [2024-10-08T15:40:16.874Z] 3956.67 IOPS, 15.46 MiB/s [2024-10-08T15:40:17.813Z] 3517.50 IOPS, 13.74 MiB/s [2024-10-08T15:40:18.753Z] 3634.00 IOPS, 14.20 MiB/s [2024-10-08T15:40:19.694Z] 3668.17 IOPS, 14.33 MiB/s [2024-10-08T15:40:20.634Z] 3410.86 IOPS, 13.32 MiB/s [2024-10-08T15:40:22.015Z] 3336.75 IOPS, 13.03 MiB/s [2024-10-08T15:40:22.956Z] 3602.22 IOPS, 14.07 MiB/s [2024-10-08T15:40:22.956Z] 3639.40 IOPS, 14.22 MiB/s 00:24:30.964 Latency(us) 00:24:30.964 [2024-10-08T15:40:22.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.964 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:30.964 Verification LBA range: start 0x0 length 0x2000 00:24:30.964 TLSTESTn1 : 10.01 3647.60 14.25 0.00 0.00 35054.53 5297.49 141557.76 00:24:30.964 [2024-10-08T15:40:22.956Z] =================================================================================================================== 00:24:30.964 [2024-10-08T15:40:22.956Z] Total : 3647.60 14.25 0.00 0.00 35054.53 5297.49 141557.76 00:24:30.964 { 00:24:30.964 "results": [ 00:24:30.964 { 00:24:30.964 "job": "TLSTESTn1", 00:24:30.964 "core_mask": "0x4", 00:24:30.964 "workload": "verify", 00:24:30.964 "status": "finished", 00:24:30.964 "verify_range": { 00:24:30.964 "start": 0, 00:24:30.964 "length": 8192 00:24:30.964 }, 00:24:30.964 "queue_depth": 128, 00:24:30.964 "io_size": 4096, 00:24:30.964 "runtime": 10.012335, 00:24:30.964 "iops": 3647.6006845556008, 00:24:30.964 "mibps": 14.248440174045315, 00:24:30.964 "io_failed": 0, 00:24:30.964 "io_timeout": 0, 00:24:30.964 "avg_latency_us": 35054.527520422045, 00:24:30.964 "min_latency_us": 5297.493333333333, 00:24:30.964 "max_latency_us": 141557.76 00:24:30.964 } 00:24:30.964 ], 00:24:30.964 
"core_count": 1 00:24:30.964 } 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 391903 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 391903 ']' 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 391903 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 391903 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 391903' 00:24:30.964 killing process with pid 391903 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 391903 00:24:30.964 Received shutdown signal, test time was about 10.000000 seconds 00:24:30.964 00:24:30.964 Latency(us) 00:24:30.964 [2024-10-08T15:40:22.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.964 [2024-10-08T15:40:22.956Z] =================================================================================================================== 00:24:30.964 [2024-10-08T15:40:22.956Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 391903 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.OQboOdiAzi 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OQboOdiAzi 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OQboOdiAzi 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OQboOdiAzi 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:30.964 
17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OQboOdiAzi 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=394149 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 394149 /var/tmp/bdevperf.sock 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 394149 ']' 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:30.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:30.964 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.964 [2024-10-08 17:40:22.877355] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
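Each test block installs a cleanup trap before waiting on its daemons, so a failed assertion still tears everything down on exit. The idiom, in outline (a simplified sketch of the pattern visible in the xtrace; in the harness the trap body is replaced or cleared once the block completes normally):

trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
# ... start daemons and run assertions ...
trap - SIGINT SIGTERM EXIT   # assumed success path: drop the trap, then clean up
cleanup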
00:24:30.964 [2024-10-08 17:40:22.877412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid394149 ] 00:24:30.964 [2024-10-08 17:40:22.953316] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.224 [2024-10-08 17:40:23.004484] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.796 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:31.796 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:31.796 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OQboOdiAzi 00:24:32.057 [2024-10-08 17:40:23.814316] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.OQboOdiAzi': 0100666 00:24:32.057 [2024-10-08 17:40:23.814341] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:32.057 request: 00:24:32.057 { 00:24:32.057 "name": "key0", 00:24:32.057 "path": "/tmp/tmp.OQboOdiAzi", 00:24:32.057 "method": "keyring_file_add_key", 00:24:32.057 "req_id": 1 00:24:32.057 } 00:24:32.057 Got JSON-RPC error response 00:24:32.057 response: 00:24:32.057 { 00:24:32.057 "code": -1, 00:24:32.057 "message": "Operation not permitted" 00:24:32.057 } 00:24:32.057 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:32.057 [2024-10-08 17:40:23.982811] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:32.057 [2024-10-08 17:40:23.982837] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:32.057 request: 00:24:32.057 { 00:24:32.057 "name": "TLSTEST", 00:24:32.057 "trtype": "tcp", 00:24:32.057 "traddr": "10.0.0.2", 00:24:32.057 "adrfam": "ipv4", 00:24:32.057 "trsvcid": "4420", 00:24:32.057 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.057 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:32.057 "prchk_reftag": false, 00:24:32.057 "prchk_guard": false, 00:24:32.057 "hdgst": false, 00:24:32.057 "ddgst": false, 00:24:32.057 "psk": "key0", 00:24:32.057 "allow_unrecognized_csi": false, 00:24:32.057 "method": "bdev_nvme_attach_controller", 00:24:32.057 "req_id": 1 00:24:32.057 } 00:24:32.057 Got JSON-RPC error response 00:24:32.057 response: 00:24:32.057 { 00:24:32.057 "code": -126, 00:24:32.057 "message": "Required key not available" 00:24:32.057 } 00:24:32.057 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 394149 00:24:32.057 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 394149 ']' 00:24:32.057 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 394149 00:24:32.057 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:32.057 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:32.057 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 394149 00:24:32.318 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:32.318 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:32.318 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 394149' 00:24:32.318 killing process with pid 394149 00:24:32.318 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 394149 00:24:32.318 Received shutdown signal, test time was about 10.000000 seconds 00:24:32.318 00:24:32.318 Latency(us) 00:24:32.318 [2024-10-08T15:40:24.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.318 [2024-10-08T15:40:24.310Z] =================================================================================================================== 00:24:32.318 [2024-10-08T15:40:24.310Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:32.318 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 394149 00:24:32.318 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:32.318 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:32.318 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:32.318 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:32.318 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:32.318 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 391446 00:24:32.318 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 391446 ']' 00:24:32.318 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 391446 00:24:32.318 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:32.318 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:32.318 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 391446 00:24:32.318 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:32.318 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:32.318 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 391446' 00:24:32.318 killing process with pid 391446 00:24:32.318 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 391446 00:24:32.318 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 391446 00:24:32.578 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:32.578 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:32.578 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:32.578 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.578 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=394500 
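Both failures above come down to file permissions: target/tls.sh@171 deliberately loosened the key to 0666, and SPDK's keyring refuses key files that are group- or world-accessible. The next block (target/tls.sh@178) shows the same rejection on the target side before the key is tightened back to 0600 at target/tls.sh@182. The behaviour reduces to this sketch (/tmp/psk.key is a hypothetical path):

chmod 0666 /tmp/psk.key
rpc.py keyring_file_add_key key0 /tmp/psk.key
# -> "Invalid permissions for key file '/tmp/psk.key': 0100666", Operation not permitted

chmod 0600 /tmp/psk.key
rpc.py keyring_file_add_key key0 /tmp/psk.key    # accepted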
00:24:32.578 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 394500 00:24:32.578 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:32.578 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 394500 ']' 00:24:32.578 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.578 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:32.578 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.578 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:32.578 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.578 [2024-10-08 17:40:24.431633] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:24:32.578 [2024-10-08 17:40:24.431694] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.578 [2024-10-08 17:40:24.514666] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.578 [2024-10-08 17:40:24.568655] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.578 [2024-10-08 17:40:24.568688] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.578 [2024-10-08 17:40:24.568697] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.578 [2024-10-08 17:40:24.568701] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.578 [2024-10-08 17:40:24.568705] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
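The tracepoint notices just printed are actionable while the target is up: the app was started with -e 0xFFFF, so every tracepoint group is enabled. Per the notices above:

# capture a snapshot of trace events from instance 0 of the nvmf app
spdk_trace -s nvmf -i 0

# or keep the shared-memory trace file for offline analysis
cp /dev/shm/nvmf_trace.0 /tmp/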
00:24:32.578 [2024-10-08 17:40:24.569183] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.519 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:33.519 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:33.519 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:33.519 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:33.519 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:33.519 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.519 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.OQboOdiAzi 00:24:33.519 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:33.519 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.OQboOdiAzi 00:24:33.519 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:24:33.519 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:33.519 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:24:33.519 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:33.519 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.OQboOdiAzi 00:24:33.519 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.OQboOdiAzi 00:24:33.519 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:33.519 [2024-10-08 17:40:25.408531] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.519 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:33.779 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:33.779 [2024-10-08 17:40:25.765404] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:33.780 [2024-10-08 17:40:25.765594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.040 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:34.040 malloc0 00:24:34.040 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:34.301 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.OQboOdiAzi 00:24:34.561 [2024-10-08 
17:40:26.310125] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.OQboOdiAzi': 0100666 00:24:34.561 [2024-10-08 17:40:26.310145] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:34.561 request: 00:24:34.561 { 00:24:34.561 "name": "key0", 00:24:34.561 "path": "/tmp/tmp.OQboOdiAzi", 00:24:34.561 "method": "keyring_file_add_key", 00:24:34.561 "req_id": 1 00:24:34.561 } 00:24:34.561 Got JSON-RPC error response 00:24:34.561 response: 00:24:34.561 { 00:24:34.561 "code": -1, 00:24:34.561 "message": "Operation not permitted" 00:24:34.561 } 00:24:34.561 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:34.561 [2024-10-08 17:40:26.486589] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:34.561 [2024-10-08 17:40:26.486615] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:34.561 request: 00:24:34.561 { 00:24:34.561 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.561 "host": "nqn.2016-06.io.spdk:host1", 00:24:34.561 "psk": "key0", 00:24:34.561 "method": "nvmf_subsystem_add_host", 00:24:34.561 "req_id": 1 00:24:34.561 } 00:24:34.561 Got JSON-RPC error response 00:24:34.561 response: 00:24:34.561 { 00:24:34.561 "code": -32603, 00:24:34.561 "message": "Internal error" 00:24:34.561 } 00:24:34.561 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:34.561 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:34.561 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:34.561 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:34.561 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 394500 00:24:34.561 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 394500 ']' 00:24:34.561 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 394500 00:24:34.561 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:34.561 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:34.561 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 394500 00:24:34.822 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:34.822 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:34.822 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 394500' 00:24:34.822 killing process with pid 394500 00:24:34.822 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 394500 00:24:34.822 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 394500 00:24:34.822 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.OQboOdiAzi 00:24:34.822 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:34.822 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:34.822 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:34.822 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.822 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=394926 00:24:34.822 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 394926 00:24:34.822 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:34.822 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 394926 ']' 00:24:34.822 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.822 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:34.823 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.823 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:34.823 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.823 [2024-10-08 17:40:26.775413] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:24:34.823 [2024-10-08 17:40:26.775467] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.083 [2024-10-08 17:40:26.861534] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.083 [2024-10-08 17:40:26.921692] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.083 [2024-10-08 17:40:26.921729] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.083 [2024-10-08 17:40:26.921735] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.083 [2024-10-08 17:40:26.921740] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.083 [2024-10-08 17:40:26.921745] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
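A note on the -m masks recurring throughout this run: they are hexadecimal bitmasks of host cores, which is why the reactor line that follows reports a specific core number.

# -m 0x2 -> bit 1 set -> one reactor, "Reactor started on core 1" (nvmf_tgt)
# -m 0x4 -> bit 2 set -> one reactor, "Reactor started on core 2" (bdevperf)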
00:24:35.083 [2024-10-08 17:40:26.922271] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.653 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:35.653 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:35.653 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:35.653 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:35.653 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.653 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.653 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.OQboOdiAzi 00:24:35.653 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.OQboOdiAzi 00:24:35.653 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:35.913 [2024-10-08 17:40:27.767670] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.913 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:36.175 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:36.175 [2024-10-08 17:40:28.124550] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:36.175 [2024-10-08 17:40:28.124726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.175 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:36.435 malloc0 00:24:36.435 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:36.695 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.OQboOdiAzi 00:24:36.956 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:36.956 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:36.956 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=395501 00:24:36.956 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:36.956 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 395501 /var/tmp/bdevperf.sock 00:24:36.956 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 395501 ']' 00:24:36.956 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:36.956 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:36.956 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:36.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:36.956 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:36.956 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.956 [2024-10-08 17:40:28.922292] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:24:36.956 [2024-10-08 17:40:28.922345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395501 ] 00:24:37.216 [2024-10-08 17:40:28.998909] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.216 [2024-10-08 17:40:29.061549] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.785 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:37.785 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:37.785 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OQboOdiAzi 00:24:38.046 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:38.307 [2024-10-08 17:40:30.056986] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:38.307 TLSTESTn1 00:24:38.307 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:38.568 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:38.568 "subsystems": [ 00:24:38.568 { 00:24:38.568 "subsystem": "keyring", 00:24:38.568 "config": [ 00:24:38.568 { 00:24:38.568 "method": "keyring_file_add_key", 00:24:38.568 "params": { 00:24:38.568 "name": "key0", 00:24:38.568 "path": "/tmp/tmp.OQboOdiAzi" 00:24:38.568 } 00:24:38.568 } 00:24:38.568 ] 00:24:38.568 }, 00:24:38.568 { 00:24:38.568 "subsystem": "iobuf", 00:24:38.568 "config": [ 00:24:38.568 { 00:24:38.568 "method": "iobuf_set_options", 00:24:38.568 "params": { 00:24:38.568 "small_pool_count": 8192, 00:24:38.568 "large_pool_count": 1024, 00:24:38.568 "small_bufsize": 8192, 00:24:38.568 "large_bufsize": 135168 00:24:38.568 } 00:24:38.568 } 00:24:38.568 ] 00:24:38.568 }, 00:24:38.568 { 00:24:38.568 "subsystem": "sock", 00:24:38.568 "config": [ 00:24:38.568 { 00:24:38.568 "method": "sock_set_default_impl", 00:24:38.568 "params": { 00:24:38.568 "impl_name": "posix" 00:24:38.568 } 00:24:38.568 }, 
00:24:38.568 { 00:24:38.568 "method": "sock_impl_set_options", 00:24:38.568 "params": { 00:24:38.568 "impl_name": "ssl", 00:24:38.568 "recv_buf_size": 4096, 00:24:38.568 "send_buf_size": 4096, 00:24:38.568 "enable_recv_pipe": true, 00:24:38.568 "enable_quickack": false, 00:24:38.568 "enable_placement_id": 0, 00:24:38.568 "enable_zerocopy_send_server": true, 00:24:38.568 "enable_zerocopy_send_client": false, 00:24:38.568 "zerocopy_threshold": 0, 00:24:38.568 "tls_version": 0, 00:24:38.568 "enable_ktls": false 00:24:38.568 } 00:24:38.568 }, 00:24:38.568 { 00:24:38.568 "method": "sock_impl_set_options", 00:24:38.568 "params": { 00:24:38.568 "impl_name": "posix", 00:24:38.568 "recv_buf_size": 2097152, 00:24:38.568 "send_buf_size": 2097152, 00:24:38.568 "enable_recv_pipe": true, 00:24:38.568 "enable_quickack": false, 00:24:38.568 "enable_placement_id": 0, 00:24:38.568 "enable_zerocopy_send_server": true, 00:24:38.568 "enable_zerocopy_send_client": false, 00:24:38.568 "zerocopy_threshold": 0, 00:24:38.568 "tls_version": 0, 00:24:38.568 "enable_ktls": false 00:24:38.568 } 00:24:38.568 } 00:24:38.568 ] 00:24:38.568 }, 00:24:38.568 { 00:24:38.568 "subsystem": "vmd", 00:24:38.568 "config": [] 00:24:38.568 }, 00:24:38.568 { 00:24:38.568 "subsystem": "accel", 00:24:38.568 "config": [ 00:24:38.568 { 00:24:38.568 "method": "accel_set_options", 00:24:38.568 "params": { 00:24:38.568 "small_cache_size": 128, 00:24:38.568 "large_cache_size": 16, 00:24:38.568 "task_count": 2048, 00:24:38.568 "sequence_count": 2048, 00:24:38.568 "buf_count": 2048 00:24:38.568 } 00:24:38.568 } 00:24:38.568 ] 00:24:38.568 }, 00:24:38.568 { 00:24:38.568 "subsystem": "bdev", 00:24:38.568 "config": [ 00:24:38.568 { 00:24:38.568 "method": "bdev_set_options", 00:24:38.568 "params": { 00:24:38.568 "bdev_io_pool_size": 65535, 00:24:38.568 "bdev_io_cache_size": 256, 00:24:38.568 "bdev_auto_examine": true, 00:24:38.568 "iobuf_small_cache_size": 128, 00:24:38.568 "iobuf_large_cache_size": 16 00:24:38.568 } 00:24:38.568 }, 00:24:38.568 { 00:24:38.568 "method": "bdev_raid_set_options", 00:24:38.568 "params": { 00:24:38.568 "process_window_size_kb": 1024, 00:24:38.568 "process_max_bandwidth_mb_sec": 0 00:24:38.568 } 00:24:38.568 }, 00:24:38.568 { 00:24:38.568 "method": "bdev_iscsi_set_options", 00:24:38.568 "params": { 00:24:38.568 "timeout_sec": 30 00:24:38.568 } 00:24:38.568 }, 00:24:38.568 { 00:24:38.568 "method": "bdev_nvme_set_options", 00:24:38.568 "params": { 00:24:38.568 "action_on_timeout": "none", 00:24:38.568 "timeout_us": 0, 00:24:38.568 "timeout_admin_us": 0, 00:24:38.568 "keep_alive_timeout_ms": 10000, 00:24:38.568 "arbitration_burst": 0, 00:24:38.568 "low_priority_weight": 0, 00:24:38.568 "medium_priority_weight": 0, 00:24:38.568 "high_priority_weight": 0, 00:24:38.568 "nvme_adminq_poll_period_us": 10000, 00:24:38.568 "nvme_ioq_poll_period_us": 0, 00:24:38.568 "io_queue_requests": 0, 00:24:38.568 "delay_cmd_submit": true, 00:24:38.568 "transport_retry_count": 4, 00:24:38.568 "bdev_retry_count": 3, 00:24:38.568 "transport_ack_timeout": 0, 00:24:38.568 "ctrlr_loss_timeout_sec": 0, 00:24:38.568 "reconnect_delay_sec": 0, 00:24:38.568 "fast_io_fail_timeout_sec": 0, 00:24:38.568 "disable_auto_failback": false, 00:24:38.568 "generate_uuids": false, 00:24:38.568 "transport_tos": 0, 00:24:38.568 "nvme_error_stat": false, 00:24:38.568 "rdma_srq_size": 0, 00:24:38.568 "io_path_stat": false, 00:24:38.568 "allow_accel_sequence": false, 00:24:38.568 "rdma_max_cq_size": 0, 00:24:38.568 "rdma_cm_event_timeout_ms": 0, 00:24:38.568 
"dhchap_digests": [ 00:24:38.568 "sha256", 00:24:38.568 "sha384", 00:24:38.568 "sha512" 00:24:38.568 ], 00:24:38.568 "dhchap_dhgroups": [ 00:24:38.568 "null", 00:24:38.568 "ffdhe2048", 00:24:38.568 "ffdhe3072", 00:24:38.568 "ffdhe4096", 00:24:38.568 "ffdhe6144", 00:24:38.568 "ffdhe8192" 00:24:38.568 ] 00:24:38.568 } 00:24:38.568 }, 00:24:38.568 { 00:24:38.568 "method": "bdev_nvme_set_hotplug", 00:24:38.568 "params": { 00:24:38.568 "period_us": 100000, 00:24:38.568 "enable": false 00:24:38.568 } 00:24:38.568 }, 00:24:38.568 { 00:24:38.568 "method": "bdev_malloc_create", 00:24:38.568 "params": { 00:24:38.568 "name": "malloc0", 00:24:38.569 "num_blocks": 8192, 00:24:38.569 "block_size": 4096, 00:24:38.569 "physical_block_size": 4096, 00:24:38.569 "uuid": "f5170307-618e-4d66-b417-aedfe808356e", 00:24:38.569 "optimal_io_boundary": 0, 00:24:38.569 "md_size": 0, 00:24:38.569 "dif_type": 0, 00:24:38.569 "dif_is_head_of_md": false, 00:24:38.569 "dif_pi_format": 0 00:24:38.569 } 00:24:38.569 }, 00:24:38.569 { 00:24:38.569 "method": "bdev_wait_for_examine" 00:24:38.569 } 00:24:38.569 ] 00:24:38.569 }, 00:24:38.569 { 00:24:38.569 "subsystem": "nbd", 00:24:38.569 "config": [] 00:24:38.569 }, 00:24:38.569 { 00:24:38.569 "subsystem": "scheduler", 00:24:38.569 "config": [ 00:24:38.569 { 00:24:38.569 "method": "framework_set_scheduler", 00:24:38.569 "params": { 00:24:38.569 "name": "static" 00:24:38.569 } 00:24:38.569 } 00:24:38.569 ] 00:24:38.569 }, 00:24:38.569 { 00:24:38.569 "subsystem": "nvmf", 00:24:38.569 "config": [ 00:24:38.569 { 00:24:38.569 "method": "nvmf_set_config", 00:24:38.569 "params": { 00:24:38.569 "discovery_filter": "match_any", 00:24:38.569 "admin_cmd_passthru": { 00:24:38.569 "identify_ctrlr": false 00:24:38.569 }, 00:24:38.569 "dhchap_digests": [ 00:24:38.569 "sha256", 00:24:38.569 "sha384", 00:24:38.569 "sha512" 00:24:38.569 ], 00:24:38.569 "dhchap_dhgroups": [ 00:24:38.569 "null", 00:24:38.569 "ffdhe2048", 00:24:38.569 "ffdhe3072", 00:24:38.569 "ffdhe4096", 00:24:38.569 "ffdhe6144", 00:24:38.569 "ffdhe8192" 00:24:38.569 ] 00:24:38.569 } 00:24:38.569 }, 00:24:38.569 { 00:24:38.569 "method": "nvmf_set_max_subsystems", 00:24:38.569 "params": { 00:24:38.569 "max_subsystems": 1024 00:24:38.569 } 00:24:38.569 }, 00:24:38.569 { 00:24:38.569 "method": "nvmf_set_crdt", 00:24:38.569 "params": { 00:24:38.569 "crdt1": 0, 00:24:38.569 "crdt2": 0, 00:24:38.569 "crdt3": 0 00:24:38.569 } 00:24:38.569 }, 00:24:38.569 { 00:24:38.569 "method": "nvmf_create_transport", 00:24:38.569 "params": { 00:24:38.569 "trtype": "TCP", 00:24:38.569 "max_queue_depth": 128, 00:24:38.569 "max_io_qpairs_per_ctrlr": 127, 00:24:38.569 "in_capsule_data_size": 4096, 00:24:38.569 "max_io_size": 131072, 00:24:38.569 "io_unit_size": 131072, 00:24:38.569 "max_aq_depth": 128, 00:24:38.569 "num_shared_buffers": 511, 00:24:38.569 "buf_cache_size": 4294967295, 00:24:38.569 "dif_insert_or_strip": false, 00:24:38.569 "zcopy": false, 00:24:38.569 "c2h_success": false, 00:24:38.569 "sock_priority": 0, 00:24:38.569 "abort_timeout_sec": 1, 00:24:38.569 "ack_timeout": 0, 00:24:38.569 "data_wr_pool_size": 0 00:24:38.569 } 00:24:38.569 }, 00:24:38.569 { 00:24:38.569 "method": "nvmf_create_subsystem", 00:24:38.569 "params": { 00:24:38.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.569 "allow_any_host": false, 00:24:38.569 "serial_number": "SPDK00000000000001", 00:24:38.569 "model_number": "SPDK bdev Controller", 00:24:38.569 "max_namespaces": 10, 00:24:38.569 "min_cntlid": 1, 00:24:38.569 "max_cntlid": 65519, 00:24:38.569 
"ana_reporting": false 00:24:38.569 } 00:24:38.569 }, 00:24:38.569 { 00:24:38.569 "method": "nvmf_subsystem_add_host", 00:24:38.569 "params": { 00:24:38.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.569 "host": "nqn.2016-06.io.spdk:host1", 00:24:38.569 "psk": "key0" 00:24:38.569 } 00:24:38.569 }, 00:24:38.569 { 00:24:38.569 "method": "nvmf_subsystem_add_ns", 00:24:38.569 "params": { 00:24:38.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.569 "namespace": { 00:24:38.569 "nsid": 1, 00:24:38.569 "bdev_name": "malloc0", 00:24:38.569 "nguid": "F5170307618E4D66B417AEDFE808356E", 00:24:38.569 "uuid": "f5170307-618e-4d66-b417-aedfe808356e", 00:24:38.569 "no_auto_visible": false 00:24:38.569 } 00:24:38.569 } 00:24:38.569 }, 00:24:38.569 { 00:24:38.569 "method": "nvmf_subsystem_add_listener", 00:24:38.569 "params": { 00:24:38.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.569 "listen_address": { 00:24:38.569 "trtype": "TCP", 00:24:38.569 "adrfam": "IPv4", 00:24:38.569 "traddr": "10.0.0.2", 00:24:38.569 "trsvcid": "4420" 00:24:38.569 }, 00:24:38.569 "secure_channel": true 00:24:38.569 } 00:24:38.569 } 00:24:38.569 ] 00:24:38.569 } 00:24:38.569 ] 00:24:38.569 }' 00:24:38.569 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:38.830 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:38.830 "subsystems": [ 00:24:38.830 { 00:24:38.830 "subsystem": "keyring", 00:24:38.830 "config": [ 00:24:38.830 { 00:24:38.830 "method": "keyring_file_add_key", 00:24:38.830 "params": { 00:24:38.830 "name": "key0", 00:24:38.830 "path": "/tmp/tmp.OQboOdiAzi" 00:24:38.830 } 00:24:38.830 } 00:24:38.830 ] 00:24:38.830 }, 00:24:38.830 { 00:24:38.830 "subsystem": "iobuf", 00:24:38.830 "config": [ 00:24:38.830 { 00:24:38.830 "method": "iobuf_set_options", 00:24:38.830 "params": { 00:24:38.830 "small_pool_count": 8192, 00:24:38.830 "large_pool_count": 1024, 00:24:38.830 "small_bufsize": 8192, 00:24:38.830 "large_bufsize": 135168 00:24:38.830 } 00:24:38.830 } 00:24:38.830 ] 00:24:38.830 }, 00:24:38.830 { 00:24:38.830 "subsystem": "sock", 00:24:38.830 "config": [ 00:24:38.830 { 00:24:38.830 "method": "sock_set_default_impl", 00:24:38.830 "params": { 00:24:38.830 "impl_name": "posix" 00:24:38.830 } 00:24:38.830 }, 00:24:38.830 { 00:24:38.830 "method": "sock_impl_set_options", 00:24:38.830 "params": { 00:24:38.830 "impl_name": "ssl", 00:24:38.830 "recv_buf_size": 4096, 00:24:38.830 "send_buf_size": 4096, 00:24:38.830 "enable_recv_pipe": true, 00:24:38.830 "enable_quickack": false, 00:24:38.830 "enable_placement_id": 0, 00:24:38.830 "enable_zerocopy_send_server": true, 00:24:38.830 "enable_zerocopy_send_client": false, 00:24:38.830 "zerocopy_threshold": 0, 00:24:38.830 "tls_version": 0, 00:24:38.830 "enable_ktls": false 00:24:38.830 } 00:24:38.830 }, 00:24:38.830 { 00:24:38.830 "method": "sock_impl_set_options", 00:24:38.830 "params": { 00:24:38.830 "impl_name": "posix", 00:24:38.830 "recv_buf_size": 2097152, 00:24:38.830 "send_buf_size": 2097152, 00:24:38.830 "enable_recv_pipe": true, 00:24:38.830 "enable_quickack": false, 00:24:38.830 "enable_placement_id": 0, 00:24:38.830 "enable_zerocopy_send_server": true, 00:24:38.830 "enable_zerocopy_send_client": false, 00:24:38.830 "zerocopy_threshold": 0, 00:24:38.830 "tls_version": 0, 00:24:38.830 "enable_ktls": false 00:24:38.830 } 00:24:38.830 } 00:24:38.830 ] 00:24:38.830 }, 00:24:38.830 { 00:24:38.830 
"subsystem": "vmd", 00:24:38.830 "config": [] 00:24:38.830 }, 00:24:38.830 { 00:24:38.830 "subsystem": "accel", 00:24:38.830 "config": [ 00:24:38.830 { 00:24:38.830 "method": "accel_set_options", 00:24:38.830 "params": { 00:24:38.830 "small_cache_size": 128, 00:24:38.830 "large_cache_size": 16, 00:24:38.830 "task_count": 2048, 00:24:38.830 "sequence_count": 2048, 00:24:38.830 "buf_count": 2048 00:24:38.830 } 00:24:38.830 } 00:24:38.830 ] 00:24:38.830 }, 00:24:38.830 { 00:24:38.830 "subsystem": "bdev", 00:24:38.830 "config": [ 00:24:38.830 { 00:24:38.830 "method": "bdev_set_options", 00:24:38.830 "params": { 00:24:38.830 "bdev_io_pool_size": 65535, 00:24:38.830 "bdev_io_cache_size": 256, 00:24:38.830 "bdev_auto_examine": true, 00:24:38.830 "iobuf_small_cache_size": 128, 00:24:38.830 "iobuf_large_cache_size": 16 00:24:38.830 } 00:24:38.830 }, 00:24:38.830 { 00:24:38.830 "method": "bdev_raid_set_options", 00:24:38.830 "params": { 00:24:38.830 "process_window_size_kb": 1024, 00:24:38.830 "process_max_bandwidth_mb_sec": 0 00:24:38.830 } 00:24:38.830 }, 00:24:38.830 { 00:24:38.830 "method": "bdev_iscsi_set_options", 00:24:38.830 "params": { 00:24:38.830 "timeout_sec": 30 00:24:38.830 } 00:24:38.830 }, 00:24:38.830 { 00:24:38.830 "method": "bdev_nvme_set_options", 00:24:38.830 "params": { 00:24:38.830 "action_on_timeout": "none", 00:24:38.830 "timeout_us": 0, 00:24:38.830 "timeout_admin_us": 0, 00:24:38.830 "keep_alive_timeout_ms": 10000, 00:24:38.830 "arbitration_burst": 0, 00:24:38.830 "low_priority_weight": 0, 00:24:38.830 "medium_priority_weight": 0, 00:24:38.830 "high_priority_weight": 0, 00:24:38.830 "nvme_adminq_poll_period_us": 10000, 00:24:38.831 "nvme_ioq_poll_period_us": 0, 00:24:38.831 "io_queue_requests": 512, 00:24:38.831 "delay_cmd_submit": true, 00:24:38.831 "transport_retry_count": 4, 00:24:38.831 "bdev_retry_count": 3, 00:24:38.831 "transport_ack_timeout": 0, 00:24:38.831 "ctrlr_loss_timeout_sec": 0, 00:24:38.831 "reconnect_delay_sec": 0, 00:24:38.831 "fast_io_fail_timeout_sec": 0, 00:24:38.831 "disable_auto_failback": false, 00:24:38.831 "generate_uuids": false, 00:24:38.831 "transport_tos": 0, 00:24:38.831 "nvme_error_stat": false, 00:24:38.831 "rdma_srq_size": 0, 00:24:38.831 "io_path_stat": false, 00:24:38.831 "allow_accel_sequence": false, 00:24:38.831 "rdma_max_cq_size": 0, 00:24:38.831 "rdma_cm_event_timeout_ms": 0, 00:24:38.831 "dhchap_digests": [ 00:24:38.831 "sha256", 00:24:38.831 "sha384", 00:24:38.831 "sha512" 00:24:38.831 ], 00:24:38.831 "dhchap_dhgroups": [ 00:24:38.831 "null", 00:24:38.831 "ffdhe2048", 00:24:38.831 "ffdhe3072", 00:24:38.831 "ffdhe4096", 00:24:38.831 "ffdhe6144", 00:24:38.831 "ffdhe8192" 00:24:38.831 ] 00:24:38.831 } 00:24:38.831 }, 00:24:38.831 { 00:24:38.831 "method": "bdev_nvme_attach_controller", 00:24:38.831 "params": { 00:24:38.831 "name": "TLSTEST", 00:24:38.831 "trtype": "TCP", 00:24:38.831 "adrfam": "IPv4", 00:24:38.831 "traddr": "10.0.0.2", 00:24:38.831 "trsvcid": "4420", 00:24:38.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.831 "prchk_reftag": false, 00:24:38.831 "prchk_guard": false, 00:24:38.831 "ctrlr_loss_timeout_sec": 0, 00:24:38.831 "reconnect_delay_sec": 0, 00:24:38.831 "fast_io_fail_timeout_sec": 0, 00:24:38.831 "psk": "key0", 00:24:38.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:38.831 "hdgst": false, 00:24:38.831 "ddgst": false, 00:24:38.831 "multipath": "multipath" 00:24:38.831 } 00:24:38.831 }, 00:24:38.831 { 00:24:38.831 "method": "bdev_nvme_set_hotplug", 00:24:38.831 "params": { 00:24:38.831 "period_us": 
100000, 00:24:38.831 "enable": false 00:24:38.831 } 00:24:38.831 }, 00:24:38.831 { 00:24:38.831 "method": "bdev_wait_for_examine" 00:24:38.831 } 00:24:38.831 ] 00:24:38.831 }, 00:24:38.831 { 00:24:38.831 "subsystem": "nbd", 00:24:38.831 "config": [] 00:24:38.831 } 00:24:38.831 ] 00:24:38.831 }' 00:24:38.831 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 395501 00:24:38.831 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 395501 ']' 00:24:38.831 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 395501 00:24:38.831 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:38.831 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:38.831 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 395501 00:24:38.831 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:38.831 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:38.831 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 395501' 00:24:38.831 killing process with pid 395501 00:24:38.831 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 395501 00:24:38.831 Received shutdown signal, test time was about 10.000000 seconds 00:24:38.831 00:24:38.831 Latency(us) 00:24:38.831 [2024-10-08T15:40:30.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.831 [2024-10-08T15:40:30.823Z] =================================================================================================================== 00:24:38.831 [2024-10-08T15:40:30.823Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:38.831 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 395501 00:24:39.092 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 394926 00:24:39.092 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 394926 ']' 00:24:39.092 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 394926 00:24:39.092 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:39.092 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:39.092 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 394926 00:24:39.092 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:39.092 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:39.092 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 394926' 00:24:39.092 killing process with pid 394926 00:24:39.092 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 394926 00:24:39.092 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 394926 00:24:39.092 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:39.092 17:40:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:39.092 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:39.092 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.092 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:39.092 "subsystems": [ 00:24:39.092 { 00:24:39.092 "subsystem": "keyring", 00:24:39.092 "config": [ 00:24:39.092 { 00:24:39.092 "method": "keyring_file_add_key", 00:24:39.092 "params": { 00:24:39.092 "name": "key0", 00:24:39.092 "path": "/tmp/tmp.OQboOdiAzi" 00:24:39.092 } 00:24:39.092 } 00:24:39.092 ] 00:24:39.092 }, 00:24:39.092 { 00:24:39.092 "subsystem": "iobuf", 00:24:39.092 "config": [ 00:24:39.092 { 00:24:39.092 "method": "iobuf_set_options", 00:24:39.092 "params": { 00:24:39.092 "small_pool_count": 8192, 00:24:39.092 "large_pool_count": 1024, 00:24:39.092 "small_bufsize": 8192, 00:24:39.092 "large_bufsize": 135168 00:24:39.092 } 00:24:39.092 } 00:24:39.092 ] 00:24:39.092 }, 00:24:39.092 { 00:24:39.092 "subsystem": "sock", 00:24:39.092 "config": [ 00:24:39.092 { 00:24:39.092 "method": "sock_set_default_impl", 00:24:39.092 "params": { 00:24:39.092 "impl_name": "posix" 00:24:39.092 } 00:24:39.092 }, 00:24:39.092 { 00:24:39.092 "method": "sock_impl_set_options", 00:24:39.092 "params": { 00:24:39.092 "impl_name": "ssl", 00:24:39.092 "recv_buf_size": 4096, 00:24:39.092 "send_buf_size": 4096, 00:24:39.092 "enable_recv_pipe": true, 00:24:39.092 "enable_quickack": false, 00:24:39.092 "enable_placement_id": 0, 00:24:39.092 "enable_zerocopy_send_server": true, 00:24:39.092 "enable_zerocopy_send_client": false, 00:24:39.092 "zerocopy_threshold": 0, 00:24:39.092 "tls_version": 0, 00:24:39.092 "enable_ktls": false 00:24:39.092 } 00:24:39.092 }, 00:24:39.092 { 00:24:39.092 "method": "sock_impl_set_options", 00:24:39.092 "params": { 00:24:39.092 "impl_name": "posix", 00:24:39.092 "recv_buf_size": 2097152, 00:24:39.092 "send_buf_size": 2097152, 00:24:39.092 "enable_recv_pipe": true, 00:24:39.092 "enable_quickack": false, 00:24:39.092 "enable_placement_id": 0, 00:24:39.092 "enable_zerocopy_send_server": true, 00:24:39.092 "enable_zerocopy_send_client": false, 00:24:39.092 "zerocopy_threshold": 0, 00:24:39.092 "tls_version": 0, 00:24:39.092 "enable_ktls": false 00:24:39.092 } 00:24:39.092 } 00:24:39.092 ] 00:24:39.092 }, 00:24:39.092 { 00:24:39.092 "subsystem": "vmd", 00:24:39.092 "config": [] 00:24:39.092 }, 00:24:39.092 { 00:24:39.092 "subsystem": "accel", 00:24:39.092 "config": [ 00:24:39.092 { 00:24:39.092 "method": "accel_set_options", 00:24:39.092 "params": { 00:24:39.092 "small_cache_size": 128, 00:24:39.092 "large_cache_size": 16, 00:24:39.092 "task_count": 2048, 00:24:39.092 "sequence_count": 2048, 00:24:39.092 "buf_count": 2048 00:24:39.092 } 00:24:39.093 } 00:24:39.093 ] 00:24:39.093 }, 00:24:39.093 { 00:24:39.093 "subsystem": "bdev", 00:24:39.093 "config": [ 00:24:39.093 { 00:24:39.093 "method": "bdev_set_options", 00:24:39.093 "params": { 00:24:39.093 "bdev_io_pool_size": 65535, 00:24:39.093 "bdev_io_cache_size": 256, 00:24:39.093 "bdev_auto_examine": true, 00:24:39.093 "iobuf_small_cache_size": 128, 00:24:39.093 "iobuf_large_cache_size": 16 00:24:39.093 } 00:24:39.093 }, 00:24:39.093 { 00:24:39.093 "method": "bdev_raid_set_options", 00:24:39.093 "params": { 00:24:39.093 "process_window_size_kb": 1024, 00:24:39.093 "process_max_bandwidth_mb_sec": 0 00:24:39.093 } 00:24:39.093 }, 00:24:39.093 { 
00:24:39.093 "method": "bdev_iscsi_set_options", 00:24:39.093 "params": { 00:24:39.093 "timeout_sec": 30 00:24:39.093 } 00:24:39.093 }, 00:24:39.093 { 00:24:39.093 "method": "bdev_nvme_set_options", 00:24:39.093 "params": { 00:24:39.093 "action_on_timeout": "none", 00:24:39.093 "timeout_us": 0, 00:24:39.093 "timeout_admin_us": 0, 00:24:39.093 "keep_alive_timeout_ms": 10000, 00:24:39.093 "arbitration_burst": 0, 00:24:39.093 "low_priority_weight": 0, 00:24:39.093 "medium_priority_weight": 0, 00:24:39.093 "high_priority_weight": 0, 00:24:39.093 "nvme_adminq_poll_period_us": 10000, 00:24:39.093 "nvme_ioq_poll_period_us": 0, 00:24:39.093 "io_queue_requests": 0, 00:24:39.093 "delay_cmd_submit": true, 00:24:39.093 "transport_retry_count": 4, 00:24:39.093 "bdev_retry_count": 3, 00:24:39.093 "transport_ack_timeout": 0, 00:24:39.093 "ctrlr_loss_timeout_sec": 0, 00:24:39.093 "reconnect_delay_sec": 0, 00:24:39.093 "fast_io_fail_timeout_sec": 0, 00:24:39.093 "disable_auto_failback": false, 00:24:39.093 "generate_uuids": false, 00:24:39.093 "transport_tos": 0, 00:24:39.093 "nvme_error_stat": false, 00:24:39.093 "rdma_srq_size": 0, 00:24:39.093 "io_path_stat": false, 00:24:39.093 "allow_accel_sequence": false, 00:24:39.093 "rdma_max_cq_size": 0, 00:24:39.093 "rdma_cm_event_timeout_ms": 0, 00:24:39.093 "dhchap_digests": [ 00:24:39.093 "sha256", 00:24:39.093 "sha384", 00:24:39.093 "sha512" 00:24:39.093 ], 00:24:39.093 "dhchap_dhgroups": [ 00:24:39.093 "null", 00:24:39.093 "ffdhe2048", 00:24:39.093 "ffdhe3072", 00:24:39.093 "ffdhe4096", 00:24:39.093 "ffdhe6144", 00:24:39.093 "ffdhe8192" 00:24:39.093 ] 00:24:39.093 } 00:24:39.093 }, 00:24:39.093 { 00:24:39.093 "method": "bdev_nvme_set_hotplug", 00:24:39.093 "params": { 00:24:39.093 "period_us": 100000, 00:24:39.093 "enable": false 00:24:39.093 } 00:24:39.093 }, 00:24:39.093 { 00:24:39.093 "method": "bdev_malloc_create", 00:24:39.093 "params": { 00:24:39.093 "name": "malloc0", 00:24:39.093 "num_blocks": 8192, 00:24:39.093 "block_size": 4096, 00:24:39.093 "physical_block_size": 4096, 00:24:39.093 "uuid": "f5170307-618e-4d66-b417-aedfe808356e", 00:24:39.093 "optimal_io_boundary": 0, 00:24:39.093 "md_size": 0, 00:24:39.093 "dif_type": 0, 00:24:39.093 "dif_is_head_of_md": false, 00:24:39.093 "dif_pi_format": 0 00:24:39.093 } 00:24:39.093 }, 00:24:39.093 { 00:24:39.093 "method": "bdev_wait_for_examine" 00:24:39.093 } 00:24:39.093 ] 00:24:39.093 }, 00:24:39.093 { 00:24:39.093 "subsystem": "nbd", 00:24:39.093 "config": [] 00:24:39.093 }, 00:24:39.093 { 00:24:39.093 "subsystem": "scheduler", 00:24:39.093 "config": [ 00:24:39.093 { 00:24:39.093 "method": "framework_set_scheduler", 00:24:39.093 "params": { 00:24:39.093 "name": "static" 00:24:39.093 } 00:24:39.093 } 00:24:39.093 ] 00:24:39.093 }, 00:24:39.093 { 00:24:39.093 "subsystem": "nvmf", 00:24:39.093 "config": [ 00:24:39.093 { 00:24:39.093 "method": "nvmf_set_config", 00:24:39.093 "params": { 00:24:39.093 "discovery_filter": "match_any", 00:24:39.093 "admin_cmd_passthru": { 00:24:39.093 "identify_ctrlr": false 00:24:39.093 }, 00:24:39.093 "dhchap_digests": [ 00:24:39.093 "sha256", 00:24:39.093 "sha384", 00:24:39.093 "sha512" 00:24:39.093 ], 00:24:39.093 "dhchap_dhgroups": [ 00:24:39.093 "null", 00:24:39.093 "ffdhe2048", 00:24:39.093 "ffdhe3072", 00:24:39.093 "ffdhe4096", 00:24:39.093 "ffdhe6144", 00:24:39.093 "ffdhe8192" 00:24:39.093 ] 00:24:39.093 } 00:24:39.093 }, 00:24:39.093 { 00:24:39.093 "method": "nvmf_set_max_subsystems", 00:24:39.093 "params": { 00:24:39.093 "max_subsystems": 1024 00:24:39.093 } 
00:24:39.093 }, 00:24:39.093 { 00:24:39.093 "method": "nvmf_set_crdt", 00:24:39.093 "params": { 00:24:39.093 "crdt1": 0, 00:24:39.093 "crdt2": 0, 00:24:39.093 "crdt3": 0 00:24:39.093 } 00:24:39.093 }, 00:24:39.093 { 00:24:39.093 "method": "nvmf_create_transport", 00:24:39.093 "params": { 00:24:39.093 "trtype": "TCP", 00:24:39.093 "max_queue_depth": 128, 00:24:39.093 "max_io_qpairs_per_ctrlr": 127, 00:24:39.093 "in_capsule_data_size": 4096, 00:24:39.093 "max_io_size": 131072, 00:24:39.093 "io_unit_size": 131072, 00:24:39.093 "max_aq_depth": 128, 00:24:39.093 "num_shared_buffers": 511, 00:24:39.093 "buf_cache_size": 4294967295, 00:24:39.093 "dif_insert_or_strip": false, 00:24:39.093 "zcopy": false, 00:24:39.093 "c2h_success": false, 00:24:39.093 "sock_priority": 0, 00:24:39.093 "abort_timeout_sec": 1, 00:24:39.093 "ack_timeout": 0, 00:24:39.093 "data_wr_pool_size": 0 00:24:39.093 } 00:24:39.093 }, 00:24:39.093 { 00:24:39.093 "method": "nvmf_create_subsystem", 00:24:39.093 "params": { 00:24:39.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.093 "allow_any_host": false, 00:24:39.093 "serial_number": "SPDK00000000000001", 00:24:39.093 "model_number": "SPDK bdev Controller", 00:24:39.093 "max_namespaces": 10, 00:24:39.093 "min_cntlid": 1, 00:24:39.093 "max_cntlid": 65519, 00:24:39.093 "ana_reporting": false 00:24:39.093 } 00:24:39.093 }, 00:24:39.093 { 00:24:39.093 "method": "nvmf_subsystem_add_host", 00:24:39.093 "params": { 00:24:39.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.093 "host": "nqn.2016-06.io.spdk:host1", 00:24:39.093 "psk": "key0" 00:24:39.093 } 00:24:39.093 }, 00:24:39.093 { 00:24:39.093 "method": "nvmf_subsystem_add_ns", 00:24:39.093 "params": { 00:24:39.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.093 "namespace": { 00:24:39.093 "nsid": 1, 00:24:39.093 "bdev_name": "malloc0", 00:24:39.093 "nguid": "F5170307618E4D66B417AEDFE808356E", 00:24:39.093 "uuid": "f5170307-618e-4d66-b417-aedfe808356e", 00:24:39.093 "no_auto_visible": false 00:24:39.093 } 00:24:39.093 } 00:24:39.093 }, 00:24:39.093 { 00:24:39.093 "method": "nvmf_subsystem_add_listener", 00:24:39.093 "params": { 00:24:39.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.093 "listen_address": { 00:24:39.093 "trtype": "TCP", 00:24:39.093 "adrfam": "IPv4", 00:24:39.093 "traddr": "10.0.0.2", 00:24:39.093 "trsvcid": "4420" 00:24:39.093 }, 00:24:39.093 "secure_channel": true 00:24:39.093 } 00:24:39.093 } 00:24:39.093 ] 00:24:39.093 } 00:24:39.093 ] 00:24:39.093 }' 00:24:39.093 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=395913 00:24:39.093 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 395913 00:24:39.093 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:39.093 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 395913 ']' 00:24:39.093 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.093 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:39.093 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:39.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.093 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:39.094 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.354 [2024-10-08 17:40:31.119189] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:24:39.354 [2024-10-08 17:40:31.119260] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.354 [2024-10-08 17:40:31.203871] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.354 [2024-10-08 17:40:31.258476] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.354 [2024-10-08 17:40:31.258509] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.354 [2024-10-08 17:40:31.258515] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.354 [2024-10-08 17:40:31.258521] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.354 [2024-10-08 17:40:31.258526] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:39.354 [2024-10-08 17:40:31.258995] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.614 [2024-10-08 17:40:31.463837] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.614 [2024-10-08 17:40:31.495859] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:39.614 [2024-10-08 17:40:31.496057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.185 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:40.185 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:40.185 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:40.185 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:40.185 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.185 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.185 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=396021 00:24:40.185 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 396021 /var/tmp/bdevperf.sock 00:24:40.185 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 396021 ']' 00:24:40.185 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:40.185 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:40.185 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:40.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
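For reference, a condensed sketch of the setup_nvmf_tgt sequence traced above (target/tls.sh@50-59). Every command and flag is taken verbatim from the xtrace lines; $rpc is shorthand introduced here (an assumption, not in the trace) for the rpc.py path used throughout this log.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    key=/tmp/tmp.OQboOdiAzi                               # PSK file path from the trace

    $rpc nvmf_create_transport -t tcp -o                  # @52: TCP transport, flags as traced
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10   # @53
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # @54: -k makes this a TLS listener
    $rpc bdev_malloc_create 32 4096 -b malloc0            # @56: 32 MiB bdev, 4096-byte blocks (matches num_blocks 8192 in the dump)
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1                  # @57
    $rpc keyring_file_add_key key0 $key                   # @58: register the PSK file as key0
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0   # @59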
00:24:40.185 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:40.185 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:40.185 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.185 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:40.185 "subsystems": [ 00:24:40.185 { 00:24:40.185 "subsystem": "keyring", 00:24:40.185 "config": [ 00:24:40.185 { 00:24:40.185 "method": "keyring_file_add_key", 00:24:40.185 "params": { 00:24:40.185 "name": "key0", 00:24:40.185 "path": "/tmp/tmp.OQboOdiAzi" 00:24:40.185 } 00:24:40.185 } 00:24:40.185 ] 00:24:40.185 }, 00:24:40.185 { 00:24:40.185 "subsystem": "iobuf", 00:24:40.185 "config": [ 00:24:40.185 { 00:24:40.185 "method": "iobuf_set_options", 00:24:40.185 "params": { 00:24:40.185 "small_pool_count": 8192, 00:24:40.185 "large_pool_count": 1024, 00:24:40.185 "small_bufsize": 8192, 00:24:40.185 "large_bufsize": 135168 00:24:40.185 } 00:24:40.185 } 00:24:40.185 ] 00:24:40.185 }, 00:24:40.185 { 00:24:40.185 "subsystem": "sock", 00:24:40.185 "config": [ 00:24:40.185 { 00:24:40.185 "method": "sock_set_default_impl", 00:24:40.185 "params": { 00:24:40.185 "impl_name": "posix" 00:24:40.185 } 00:24:40.185 }, 00:24:40.185 { 00:24:40.185 "method": "sock_impl_set_options", 00:24:40.185 "params": { 00:24:40.185 "impl_name": "ssl", 00:24:40.185 "recv_buf_size": 4096, 00:24:40.185 "send_buf_size": 4096, 00:24:40.185 "enable_recv_pipe": true, 00:24:40.185 "enable_quickack": false, 00:24:40.185 "enable_placement_id": 0, 00:24:40.185 "enable_zerocopy_send_server": true, 00:24:40.185 "enable_zerocopy_send_client": false, 00:24:40.185 "zerocopy_threshold": 0, 00:24:40.185 "tls_version": 0, 00:24:40.185 "enable_ktls": false 00:24:40.185 } 00:24:40.185 }, 00:24:40.185 { 00:24:40.185 "method": "sock_impl_set_options", 00:24:40.185 "params": { 00:24:40.185 "impl_name": "posix", 00:24:40.185 "recv_buf_size": 2097152, 00:24:40.185 "send_buf_size": 2097152, 00:24:40.185 "enable_recv_pipe": true, 00:24:40.185 "enable_quickack": false, 00:24:40.185 "enable_placement_id": 0, 00:24:40.185 "enable_zerocopy_send_server": true, 00:24:40.185 "enable_zerocopy_send_client": false, 00:24:40.185 "zerocopy_threshold": 0, 00:24:40.185 "tls_version": 0, 00:24:40.185 "enable_ktls": false 00:24:40.185 } 00:24:40.185 } 00:24:40.185 ] 00:24:40.185 }, 00:24:40.185 { 00:24:40.185 "subsystem": "vmd", 00:24:40.185 "config": [] 00:24:40.185 }, 00:24:40.185 { 00:24:40.185 "subsystem": "accel", 00:24:40.185 "config": [ 00:24:40.185 { 00:24:40.185 "method": "accel_set_options", 00:24:40.185 "params": { 00:24:40.185 "small_cache_size": 128, 00:24:40.185 "large_cache_size": 16, 00:24:40.185 "task_count": 2048, 00:24:40.185 "sequence_count": 2048, 00:24:40.185 "buf_count": 2048 00:24:40.185 } 00:24:40.185 } 00:24:40.185 ] 00:24:40.185 }, 00:24:40.185 { 00:24:40.185 "subsystem": "bdev", 00:24:40.185 "config": [ 00:24:40.185 { 00:24:40.185 "method": "bdev_set_options", 00:24:40.185 "params": { 00:24:40.185 "bdev_io_pool_size": 65535, 00:24:40.185 "bdev_io_cache_size": 256, 00:24:40.185 "bdev_auto_examine": true, 00:24:40.185 "iobuf_small_cache_size": 128, 00:24:40.185 "iobuf_large_cache_size": 16 00:24:40.185 } 00:24:40.185 }, 00:24:40.185 { 00:24:40.185 "method": "bdev_raid_set_options", 00:24:40.185 
"params": { 00:24:40.185 "process_window_size_kb": 1024, 00:24:40.185 "process_max_bandwidth_mb_sec": 0 00:24:40.185 } 00:24:40.185 }, 00:24:40.185 { 00:24:40.185 "method": "bdev_iscsi_set_options", 00:24:40.185 "params": { 00:24:40.185 "timeout_sec": 30 00:24:40.185 } 00:24:40.185 }, 00:24:40.185 { 00:24:40.185 "method": "bdev_nvme_set_options", 00:24:40.185 "params": { 00:24:40.185 "action_on_timeout": "none", 00:24:40.185 "timeout_us": 0, 00:24:40.185 "timeout_admin_us": 0, 00:24:40.185 "keep_alive_timeout_ms": 10000, 00:24:40.185 "arbitration_burst": 0, 00:24:40.185 "low_priority_weight": 0, 00:24:40.185 "medium_priority_weight": 0, 00:24:40.185 "high_priority_weight": 0, 00:24:40.185 "nvme_adminq_poll_period_us": 10000, 00:24:40.185 "nvme_ioq_poll_period_us": 0, 00:24:40.185 "io_queue_requests": 512, 00:24:40.185 "delay_cmd_submit": true, 00:24:40.185 "transport_retry_count": 4, 00:24:40.185 "bdev_retry_count": 3, 00:24:40.185 "transport_ack_timeout": 0, 00:24:40.185 "ctrlr_loss_timeout_sec": 0, 00:24:40.185 "reconnect_delay_sec": 0, 00:24:40.185 "fast_io_fail_timeout_sec": 0, 00:24:40.185 "disable_auto_failback": false, 00:24:40.185 "generate_uuids": false, 00:24:40.185 "transport_tos": 0, 00:24:40.185 "nvme_error_stat": false, 00:24:40.185 "rdma_srq_size": 0, 00:24:40.185 "io_path_stat": false, 00:24:40.185 "allow_accel_sequence": false, 00:24:40.185 "rdma_max_cq_size": 0, 00:24:40.185 "rdma_cm_event_timeout_ms": 0, 00:24:40.185 "dhchap_digests": [ 00:24:40.185 "sha256", 00:24:40.185 "sha384", 00:24:40.185 "sha512" 00:24:40.185 ], 00:24:40.185 "dhchap_dhgroups": [ 00:24:40.185 "null", 00:24:40.185 "ffdhe2048", 00:24:40.185 "ffdhe3072", 00:24:40.185 "ffdhe4096", 00:24:40.185 "ffdhe6144", 00:24:40.185 "ffdhe8192" 00:24:40.185 ] 00:24:40.185 } 00:24:40.185 }, 00:24:40.185 { 00:24:40.185 "method": "bdev_nvme_attach_controller", 00:24:40.185 "params": { 00:24:40.185 "name": "TLSTEST", 00:24:40.185 "trtype": "TCP", 00:24:40.185 "adrfam": "IPv4", 00:24:40.185 "traddr": "10.0.0.2", 00:24:40.185 "trsvcid": "4420", 00:24:40.185 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.185 "prchk_reftag": false, 00:24:40.185 "prchk_guard": false, 00:24:40.185 "ctrlr_loss_timeout_sec": 0, 00:24:40.185 "reconnect_delay_sec": 0, 00:24:40.185 "fast_io_fail_timeout_sec": 0, 00:24:40.185 "psk": "key0", 00:24:40.185 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:40.185 "hdgst": false, 00:24:40.185 "ddgst": false, 00:24:40.185 "multipath": "multipath" 00:24:40.185 } 00:24:40.185 }, 00:24:40.185 { 00:24:40.185 "method": "bdev_nvme_set_hotplug", 00:24:40.185 "params": { 00:24:40.185 "period_us": 100000, 00:24:40.185 "enable": false 00:24:40.185 } 00:24:40.185 }, 00:24:40.185 { 00:24:40.185 "method": "bdev_wait_for_examine" 00:24:40.185 } 00:24:40.185 ] 00:24:40.185 }, 00:24:40.186 { 00:24:40.186 "subsystem": "nbd", 00:24:40.186 "config": [] 00:24:40.186 } 00:24:40.186 ] 00:24:40.186 }' 00:24:40.186 [2024-10-08 17:40:31.992523] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:24:40.186 [2024-10-08 17:40:31.992572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396021 ] 00:24:40.186 [2024-10-08 17:40:32.070059] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.186 [2024-10-08 17:40:32.133034] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:40.446 [2024-10-08 17:40:32.272389] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:41.016 17:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:41.016 17:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:41.016 17:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:41.016 Running I/O for 10 seconds... 00:24:42.898 4095.00 IOPS, 16.00 MiB/s [2024-10-08T15:40:36.272Z] 4396.00 IOPS, 17.17 MiB/s [2024-10-08T15:40:37.212Z] 4700.00 IOPS, 18.36 MiB/s [2024-10-08T15:40:38.153Z] 4863.00 IOPS, 19.00 MiB/s [2024-10-08T15:40:39.094Z] 4988.80 IOPS, 19.49 MiB/s [2024-10-08T15:40:40.035Z] 4490.33 IOPS, 17.54 MiB/s [2024-10-08T15:40:40.975Z] 4236.71 IOPS, 16.55 MiB/s [2024-10-08T15:40:41.914Z] 4144.00 IOPS, 16.19 MiB/s [2024-10-08T15:40:43.296Z] 4201.78 IOPS, 16.41 MiB/s [2024-10-08T15:40:43.296Z] 3973.30 IOPS, 15.52 MiB/s 00:24:51.304 Latency(us) 00:24:51.304 [2024-10-08T15:40:43.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.304 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:51.304 Verification LBA range: start 0x0 length 0x2000 00:24:51.304 TLSTESTn1 : 10.09 3952.18 15.44 0.00 0.00 32263.75 5870.93 84322.99 00:24:51.304 [2024-10-08T15:40:43.296Z] =================================================================================================================== 00:24:51.304 [2024-10-08T15:40:43.296Z] Total : 3952.18 15.44 0.00 0.00 32263.75 5870.93 84322.99 00:24:51.304 { 00:24:51.304 "results": [ 00:24:51.304 { 00:24:51.304 "job": "TLSTESTn1", 00:24:51.304 "core_mask": "0x4", 00:24:51.304 "workload": "verify", 00:24:51.304 "status": "finished", 00:24:51.304 "verify_range": { 00:24:51.304 "start": 0, 00:24:51.304 "length": 8192 00:24:51.304 }, 00:24:51.304 "queue_depth": 128, 00:24:51.304 "io_size": 4096, 00:24:51.304 "runtime": 10.085814, 00:24:51.304 "iops": 3952.1847220264026, 00:24:51.304 "mibps": 15.438221570415635, 00:24:51.304 "io_failed": 0, 00:24:51.304 "io_timeout": 0, 00:24:51.304 "avg_latency_us": 32263.746685063932, 00:24:51.304 "min_latency_us": 5870.933333333333, 00:24:51.304 "max_latency_us": 84322.98666666666 00:24:51.304 } 00:24:51.304 ], 00:24:51.304 "core_count": 1 00:24:51.304 } 00:24:51.304 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:51.304 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 396021 00:24:51.304 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 396021 ']' 00:24:51.304 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 396021 00:24:51.304 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:24:51.304 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:51.304 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 396021 00:24:51.305 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:51.305 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:51.305 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 396021' 00:24:51.305 killing process with pid 396021 00:24:51.305 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 396021 00:24:51.305 Received shutdown signal, test time was about 10.000000 seconds 00:24:51.305 00:24:51.305 Latency(us) 00:24:51.305 [2024-10-08T15:40:43.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.305 [2024-10-08T15:40:43.297Z] =================================================================================================================== 00:24:51.305 [2024-10-08T15:40:43.297Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:51.305 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 396021 00:24:51.305 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 395913 00:24:51.305 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 395913 ']' 00:24:51.305 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 395913 00:24:51.305 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:51.305 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:51.305 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 395913 00:24:51.305 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:51.305 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:51.305 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 395913' 00:24:51.305 killing process with pid 395913 00:24:51.305 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 395913 00:24:51.305 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 395913 00:24:51.566 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:51.566 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:51.566 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:51.566 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:51.566 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=398288 00:24:51.566 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 398288 00:24:51.566 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:51.566 17:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 398288 ']' 00:24:51.566 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.566 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:51.566 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.566 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:51.566 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:51.566 [2024-10-08 17:40:43.415427] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:24:51.566 [2024-10-08 17:40:43.415480] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.566 [2024-10-08 17:40:43.501969] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.827 [2024-10-08 17:40:43.592532] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.827 [2024-10-08 17:40:43.592596] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.827 [2024-10-08 17:40:43.592604] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.827 [2024-10-08 17:40:43.592611] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.827 [2024-10-08 17:40:43.592618] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
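The app_setup_trace notices above spell out how to inspect the tracepoint data for this target (group mask 0xFFFF, shm instance 0). Following the log's own hint:

    spdk_trace -s nvmf -i 0          # snapshot events at runtime, exactly as the notice suggests
    cp /dev/shm/nvmf_trace.0 /tmp/   # or keep the raw shm file for offline analysis/debug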
00:24:51.827 [2024-10-08 17:40:43.593439] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.398 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:52.398 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:52.398 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:52.398 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:52.398 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:52.398 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.398 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.OQboOdiAzi 00:24:52.398 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.OQboOdiAzi 00:24:52.398 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:52.659 [2024-10-08 17:40:44.443360] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.659 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:52.919 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:52.919 [2024-10-08 17:40:44.836351] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:52.919 [2024-10-08 17:40:44.836714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:52.919 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:53.180 malloc0 00:24:53.180 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:53.440 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.OQboOdiAzi 00:24:53.701 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:53.701 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=398652 00:24:53.701 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:53.701 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:53.701 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 398652 /var/tmp/bdevperf.sock 00:24:53.701 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 398652 ']' 00:24:53.701 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:53.701 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:53.701 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:53.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:53.701 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:53.701 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.961 [2024-10-08 17:40:45.716308] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:24:53.961 [2024-10-08 17:40:45.716375] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid398652 ] 00:24:53.961 [2024-10-08 17:40:45.797328] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.961 [2024-10-08 17:40:45.859076] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.533 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:54.533 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:54.533 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OQboOdiAzi 00:24:54.793 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:55.053 [2024-10-08 17:40:46.840186] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:55.053 nvme0n1 00:24:55.053 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:55.053 Running I/O for 1 seconds... 
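The client-side flow traced above, collected as one sketch: bdevperf was started with -z, so it idles on its RPC socket until the key is registered, the controller is attached with the PSK, and perform_tests is issued (target/tls.sh@229-234). Commands are verbatim from the trace; only $rpc is shorthand added here.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OQboOdiAzi   # @229
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1                 # @230: TLS handshake happens here
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests                                    # @234: drives the verify run above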
00:24:56.436 870.00 IOPS, 3.40 MiB/s 00:24:56.436 Latency(us) 00:24:56.436 [2024-10-08T15:40:48.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.436 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:56.436 Verification LBA range: start 0x0 length 0x2000 00:24:56.436 nvme0n1 : 1.08 921.07 3.60 0.00 0.00 134945.19 6580.91 200977.07 00:24:56.436 [2024-10-08T15:40:48.428Z] =================================================================================================================== 00:24:56.436 [2024-10-08T15:40:48.428Z] Total : 921.07 3.60 0.00 0.00 134945.19 6580.91 200977.07 00:24:56.436 { 00:24:56.436 "results": [ 00:24:56.436 { 00:24:56.436 "job": "nvme0n1", 00:24:56.436 "core_mask": "0x2", 00:24:56.436 "workload": "verify", 00:24:56.436 "status": "finished", 00:24:56.436 "verify_range": { 00:24:56.436 "start": 0, 00:24:56.436 "length": 8192 00:24:56.436 }, 00:24:56.436 "queue_depth": 128, 00:24:56.436 "io_size": 4096, 00:24:56.436 "runtime": 1.084606, 00:24:56.436 "iops": 921.0717993446468, 00:24:56.436 "mibps": 3.5979367161900266, 00:24:56.436 "io_failed": 0, 00:24:56.436 "io_timeout": 0, 00:24:56.436 "avg_latency_us": 134945.1868935602, 00:24:56.436 "min_latency_us": 6580.906666666667, 00:24:56.436 "max_latency_us": 200977.06666666668 00:24:56.436 } 00:24:56.436 ], 00:24:56.436 "core_count": 1 00:24:56.436 } 00:24:56.436 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 398652 00:24:56.436 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 398652 ']' 00:24:56.436 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 398652 00:24:56.436 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:56.436 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:56.436 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 398652 00:24:56.436 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:56.436 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:56.436 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 398652' 00:24:56.436 killing process with pid 398652 00:24:56.436 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 398652 00:24:56.436 Received shutdown signal, test time was about 1.000000 seconds 00:24:56.436 00:24:56.436 Latency(us) 00:24:56.436 [2024-10-08T15:40:48.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.436 [2024-10-08T15:40:48.428Z] =================================================================================================================== 00:24:56.436 [2024-10-08T15:40:48.428Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:56.436 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 398652 00:24:56.436 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 398288 00:24:56.436 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 398288 ']' 00:24:56.436 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 398288 00:24:56.436 17:40:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:56.436 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:56.436 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 398288 00:24:56.436 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:56.436 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:56.436 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 398288' 00:24:56.436 killing process with pid 398288 00:24:56.436 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 398288 00:24:56.436 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 398288 00:24:56.698 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:56.698 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:56.698 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:56.698 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:56.698 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=399332 00:24:56.698 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 399332 00:24:56.698 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:56.698 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 399332 ']' 00:24:56.698 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.698 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:56.698 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.698 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:56.698 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:56.698 [2024-10-08 17:40:48.598477] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:24:56.698 [2024-10-08 17:40:48.598531] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.698 [2024-10-08 17:40:48.686146] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.959 [2024-10-08 17:40:48.778116] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.959 [2024-10-08 17:40:48.778178] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:56.959 [2024-10-08 17:40:48.778187] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.959 [2024-10-08 17:40:48.778194] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.959 [2024-10-08 17:40:48.778201] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:56.959 [2024-10-08 17:40:48.779001] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.530 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:57.530 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:57.530 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:57.530 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:57.530 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:57.530 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.530 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:57.530 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.530 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:57.530 [2024-10-08 17:40:49.475287] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.530 malloc0 00:24:57.530 [2024-10-08 17:40:49.519613] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:57.530 [2024-10-08 17:40:49.519954] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.790 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.790 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=399512 00:24:57.790 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 399512 /var/tmp/bdevperf.sock 00:24:57.790 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:57.790 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 399512 ']' 00:24:57.790 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:57.790 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:57.790 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:57.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:57.790 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:57.790 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:57.790 [2024-10-08 17:40:49.605964] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:24:57.790 [2024-10-08 17:40:49.606033] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid399512 ] 00:24:57.790 [2024-10-08 17:40:49.687293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.790 [2024-10-08 17:40:49.748525] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.732 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:58.732 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:58.732 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OQboOdiAzi 00:24:58.732 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:58.732 [2024-10-08 17:40:50.705195] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:58.992 nvme0n1 00:24:58.992 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:58.992 Running I/O for 1 seconds... 00:25:00.193 3921.00 IOPS, 15.32 MiB/s 00:25:00.193 Latency(us) 00:25:00.193 [2024-10-08T15:40:52.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.193 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:00.193 Verification LBA range: start 0x0 length 0x2000 00:25:00.193 nvme0n1 : 1.04 3905.42 15.26 0.00 0.00 32300.70 5242.88 66409.81 00:25:00.193 [2024-10-08T15:40:52.185Z] =================================================================================================================== 00:25:00.193 [2024-10-08T15:40:52.185Z] Total : 3905.42 15.26 0.00 0.00 32300.70 5242.88 66409.81 00:25:00.193 { 00:25:00.193 "results": [ 00:25:00.193 { 00:25:00.193 "job": "nvme0n1", 00:25:00.193 "core_mask": "0x2", 00:25:00.193 "workload": "verify", 00:25:00.193 "status": "finished", 00:25:00.193 "verify_range": { 00:25:00.193 "start": 0, 00:25:00.193 "length": 8192 00:25:00.193 }, 00:25:00.193 "queue_depth": 128, 00:25:00.193 "io_size": 4096, 00:25:00.193 "runtime": 1.036764, 00:25:00.193 "iops": 3905.421098726422, 00:25:00.193 "mibps": 15.255551166900085, 00:25:00.193 "io_failed": 0, 00:25:00.193 "io_timeout": 0, 00:25:00.193 "avg_latency_us": 32300.701262863262, 00:25:00.193 "min_latency_us": 5242.88, 00:25:00.193 "max_latency_us": 66409.81333333334 00:25:00.193 } 00:25:00.193 ], 00:25:00.193 "core_count": 1 00:25:00.193 } 00:25:00.193 17:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:25:00.193 17:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.193 17:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:00.193 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.193 17:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:25:00.193 "subsystems": [ 00:25:00.193 { 00:25:00.193 "subsystem": "keyring", 00:25:00.193 "config": [ 00:25:00.193 { 00:25:00.193 "method": "keyring_file_add_key", 00:25:00.193 "params": { 00:25:00.193 "name": "key0", 00:25:00.193 "path": "/tmp/tmp.OQboOdiAzi" 00:25:00.193 } 00:25:00.193 } 00:25:00.193 ] 00:25:00.193 }, 00:25:00.193 { 00:25:00.193 "subsystem": "iobuf", 00:25:00.193 "config": [ 00:25:00.193 { 00:25:00.193 "method": "iobuf_set_options", 00:25:00.193 "params": { 00:25:00.193 "small_pool_count": 8192, 00:25:00.193 "large_pool_count": 1024, 00:25:00.193 "small_bufsize": 8192, 00:25:00.193 "large_bufsize": 135168 00:25:00.193 } 00:25:00.193 } 00:25:00.193 ] 00:25:00.193 }, 00:25:00.193 { 00:25:00.193 "subsystem": "sock", 00:25:00.193 "config": [ 00:25:00.193 { 00:25:00.193 "method": "sock_set_default_impl", 00:25:00.193 "params": { 00:25:00.193 "impl_name": "posix" 00:25:00.193 } 00:25:00.193 }, 00:25:00.193 { 00:25:00.193 "method": "sock_impl_set_options", 00:25:00.193 "params": { 00:25:00.193 "impl_name": "ssl", 00:25:00.193 "recv_buf_size": 4096, 00:25:00.193 "send_buf_size": 4096, 00:25:00.193 "enable_recv_pipe": true, 00:25:00.193 "enable_quickack": false, 00:25:00.193 "enable_placement_id": 0, 00:25:00.193 "enable_zerocopy_send_server": true, 00:25:00.193 "enable_zerocopy_send_client": false, 00:25:00.193 "zerocopy_threshold": 0, 00:25:00.193 "tls_version": 0, 00:25:00.193 "enable_ktls": false 00:25:00.193 } 00:25:00.193 }, 00:25:00.193 { 00:25:00.193 "method": "sock_impl_set_options", 00:25:00.193 "params": { 00:25:00.193 "impl_name": "posix", 00:25:00.193 "recv_buf_size": 2097152, 00:25:00.193 "send_buf_size": 2097152, 00:25:00.193 "enable_recv_pipe": true, 00:25:00.193 "enable_quickack": false, 00:25:00.193 "enable_placement_id": 0, 00:25:00.193 "enable_zerocopy_send_server": true, 00:25:00.193 "enable_zerocopy_send_client": false, 00:25:00.193 "zerocopy_threshold": 0, 00:25:00.193 "tls_version": 0, 00:25:00.193 "enable_ktls": false 00:25:00.193 } 00:25:00.193 } 00:25:00.193 ] 00:25:00.193 }, 00:25:00.193 { 00:25:00.193 "subsystem": "vmd", 00:25:00.193 "config": [] 00:25:00.193 }, 00:25:00.193 { 00:25:00.193 "subsystem": "accel", 00:25:00.193 "config": [ 00:25:00.193 { 00:25:00.193 "method": "accel_set_options", 00:25:00.193 "params": { 00:25:00.193 "small_cache_size": 128, 00:25:00.193 "large_cache_size": 16, 00:25:00.193 "task_count": 2048, 00:25:00.193 "sequence_count": 2048, 00:25:00.193 "buf_count": 2048 00:25:00.193 } 00:25:00.193 } 00:25:00.193 ] 00:25:00.193 }, 00:25:00.193 { 00:25:00.193 "subsystem": "bdev", 00:25:00.193 "config": [ 00:25:00.193 { 00:25:00.193 "method": "bdev_set_options", 00:25:00.193 "params": { 00:25:00.193 "bdev_io_pool_size": 65535, 00:25:00.193 "bdev_io_cache_size": 256, 00:25:00.193 "bdev_auto_examine": true, 00:25:00.193 "iobuf_small_cache_size": 128, 00:25:00.193 "iobuf_large_cache_size": 16 00:25:00.193 } 00:25:00.193 }, 00:25:00.193 { 00:25:00.193 "method": "bdev_raid_set_options", 00:25:00.193 "params": { 00:25:00.193 "process_window_size_kb": 1024, 00:25:00.193 "process_max_bandwidth_mb_sec": 0 00:25:00.193 } 00:25:00.193 }, 00:25:00.193 { 00:25:00.193 "method": "bdev_iscsi_set_options", 00:25:00.193 "params": { 00:25:00.193 "timeout_sec": 30 00:25:00.193 } 00:25:00.193 }, 00:25:00.193 { 00:25:00.193 "method": "bdev_nvme_set_options", 00:25:00.193 "params": { 00:25:00.193 "action_on_timeout": "none", 00:25:00.193 "timeout_us": 0, 00:25:00.193 
"timeout_admin_us": 0, 00:25:00.193 "keep_alive_timeout_ms": 10000, 00:25:00.193 "arbitration_burst": 0, 00:25:00.193 "low_priority_weight": 0, 00:25:00.193 "medium_priority_weight": 0, 00:25:00.193 "high_priority_weight": 0, 00:25:00.193 "nvme_adminq_poll_period_us": 10000, 00:25:00.193 "nvme_ioq_poll_period_us": 0, 00:25:00.193 "io_queue_requests": 0, 00:25:00.193 "delay_cmd_submit": true, 00:25:00.193 "transport_retry_count": 4, 00:25:00.193 "bdev_retry_count": 3, 00:25:00.193 "transport_ack_timeout": 0, 00:25:00.193 "ctrlr_loss_timeout_sec": 0, 00:25:00.193 "reconnect_delay_sec": 0, 00:25:00.193 "fast_io_fail_timeout_sec": 0, 00:25:00.193 "disable_auto_failback": false, 00:25:00.193 "generate_uuids": false, 00:25:00.193 "transport_tos": 0, 00:25:00.193 "nvme_error_stat": false, 00:25:00.193 "rdma_srq_size": 0, 00:25:00.193 "io_path_stat": false, 00:25:00.193 "allow_accel_sequence": false, 00:25:00.193 "rdma_max_cq_size": 0, 00:25:00.193 "rdma_cm_event_timeout_ms": 0, 00:25:00.193 "dhchap_digests": [ 00:25:00.193 "sha256", 00:25:00.194 "sha384", 00:25:00.194 "sha512" 00:25:00.194 ], 00:25:00.194 "dhchap_dhgroups": [ 00:25:00.194 "null", 00:25:00.194 "ffdhe2048", 00:25:00.194 "ffdhe3072", 00:25:00.194 "ffdhe4096", 00:25:00.194 "ffdhe6144", 00:25:00.194 "ffdhe8192" 00:25:00.194 ] 00:25:00.194 } 00:25:00.194 }, 00:25:00.194 { 00:25:00.194 "method": "bdev_nvme_set_hotplug", 00:25:00.194 "params": { 00:25:00.194 "period_us": 100000, 00:25:00.194 "enable": false 00:25:00.194 } 00:25:00.194 }, 00:25:00.194 { 00:25:00.194 "method": "bdev_malloc_create", 00:25:00.194 "params": { 00:25:00.194 "name": "malloc0", 00:25:00.194 "num_blocks": 8192, 00:25:00.194 "block_size": 4096, 00:25:00.194 "physical_block_size": 4096, 00:25:00.194 "uuid": "a97a8795-9ed2-4081-a6d8-825649567a40", 00:25:00.194 "optimal_io_boundary": 0, 00:25:00.194 "md_size": 0, 00:25:00.194 "dif_type": 0, 00:25:00.194 "dif_is_head_of_md": false, 00:25:00.194 "dif_pi_format": 0 00:25:00.194 } 00:25:00.194 }, 00:25:00.194 { 00:25:00.194 "method": "bdev_wait_for_examine" 00:25:00.194 } 00:25:00.194 ] 00:25:00.194 }, 00:25:00.194 { 00:25:00.194 "subsystem": "nbd", 00:25:00.194 "config": [] 00:25:00.194 }, 00:25:00.194 { 00:25:00.194 "subsystem": "scheduler", 00:25:00.194 "config": [ 00:25:00.194 { 00:25:00.194 "method": "framework_set_scheduler", 00:25:00.194 "params": { 00:25:00.194 "name": "static" 00:25:00.194 } 00:25:00.194 } 00:25:00.194 ] 00:25:00.194 }, 00:25:00.194 { 00:25:00.194 "subsystem": "nvmf", 00:25:00.194 "config": [ 00:25:00.194 { 00:25:00.194 "method": "nvmf_set_config", 00:25:00.194 "params": { 00:25:00.194 "discovery_filter": "match_any", 00:25:00.194 "admin_cmd_passthru": { 00:25:00.194 "identify_ctrlr": false 00:25:00.194 }, 00:25:00.194 "dhchap_digests": [ 00:25:00.194 "sha256", 00:25:00.194 "sha384", 00:25:00.194 "sha512" 00:25:00.194 ], 00:25:00.194 "dhchap_dhgroups": [ 00:25:00.194 "null", 00:25:00.194 "ffdhe2048", 00:25:00.194 "ffdhe3072", 00:25:00.194 "ffdhe4096", 00:25:00.194 "ffdhe6144", 00:25:00.194 "ffdhe8192" 00:25:00.194 ] 00:25:00.194 } 00:25:00.194 }, 00:25:00.194 { 00:25:00.194 "method": "nvmf_set_max_subsystems", 00:25:00.194 "params": { 00:25:00.194 "max_subsystems": 1024 00:25:00.194 } 00:25:00.194 }, 00:25:00.194 { 00:25:00.194 "method": "nvmf_set_crdt", 00:25:00.194 "params": { 00:25:00.194 "crdt1": 0, 00:25:00.194 "crdt2": 0, 00:25:00.194 "crdt3": 0 00:25:00.194 } 00:25:00.194 }, 00:25:00.194 { 00:25:00.194 "method": "nvmf_create_transport", 00:25:00.194 "params": { 00:25:00.194 "trtype": 
"TCP", 00:25:00.194 "max_queue_depth": 128, 00:25:00.194 "max_io_qpairs_per_ctrlr": 127, 00:25:00.194 "in_capsule_data_size": 4096, 00:25:00.194 "max_io_size": 131072, 00:25:00.194 "io_unit_size": 131072, 00:25:00.194 "max_aq_depth": 128, 00:25:00.194 "num_shared_buffers": 511, 00:25:00.194 "buf_cache_size": 4294967295, 00:25:00.194 "dif_insert_or_strip": false, 00:25:00.194 "zcopy": false, 00:25:00.194 "c2h_success": false, 00:25:00.194 "sock_priority": 0, 00:25:00.194 "abort_timeout_sec": 1, 00:25:00.194 "ack_timeout": 0, 00:25:00.194 "data_wr_pool_size": 0 00:25:00.194 } 00:25:00.194 }, 00:25:00.194 { 00:25:00.194 "method": "nvmf_create_subsystem", 00:25:00.194 "params": { 00:25:00.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.194 "allow_any_host": false, 00:25:00.194 "serial_number": "00000000000000000000", 00:25:00.194 "model_number": "SPDK bdev Controller", 00:25:00.194 "max_namespaces": 32, 00:25:00.194 "min_cntlid": 1, 00:25:00.194 "max_cntlid": 65519, 00:25:00.194 "ana_reporting": false 00:25:00.194 } 00:25:00.194 }, 00:25:00.194 { 00:25:00.194 "method": "nvmf_subsystem_add_host", 00:25:00.194 "params": { 00:25:00.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.194 "host": "nqn.2016-06.io.spdk:host1", 00:25:00.194 "psk": "key0" 00:25:00.194 } 00:25:00.194 }, 00:25:00.194 { 00:25:00.194 "method": "nvmf_subsystem_add_ns", 00:25:00.194 "params": { 00:25:00.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.194 "namespace": { 00:25:00.194 "nsid": 1, 00:25:00.194 "bdev_name": "malloc0", 00:25:00.194 "nguid": "A97A87959ED24081A6D8825649567A40", 00:25:00.194 "uuid": "a97a8795-9ed2-4081-a6d8-825649567a40", 00:25:00.194 "no_auto_visible": false 00:25:00.194 } 00:25:00.194 } 00:25:00.194 }, 00:25:00.194 { 00:25:00.194 "method": "nvmf_subsystem_add_listener", 00:25:00.194 "params": { 00:25:00.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.194 "listen_address": { 00:25:00.194 "trtype": "TCP", 00:25:00.194 "adrfam": "IPv4", 00:25:00.194 "traddr": "10.0.0.2", 00:25:00.194 "trsvcid": "4420" 00:25:00.194 }, 00:25:00.194 "secure_channel": false, 00:25:00.194 "sock_impl": "ssl" 00:25:00.194 } 00:25:00.194 } 00:25:00.194 ] 00:25:00.194 } 00:25:00.194 ] 00:25:00.194 }' 00:25:00.194 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:00.454 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:25:00.454 "subsystems": [ 00:25:00.454 { 00:25:00.455 "subsystem": "keyring", 00:25:00.455 "config": [ 00:25:00.455 { 00:25:00.455 "method": "keyring_file_add_key", 00:25:00.455 "params": { 00:25:00.455 "name": "key0", 00:25:00.455 "path": "/tmp/tmp.OQboOdiAzi" 00:25:00.455 } 00:25:00.455 } 00:25:00.455 ] 00:25:00.455 }, 00:25:00.455 { 00:25:00.455 "subsystem": "iobuf", 00:25:00.455 "config": [ 00:25:00.455 { 00:25:00.455 "method": "iobuf_set_options", 00:25:00.455 "params": { 00:25:00.455 "small_pool_count": 8192, 00:25:00.455 "large_pool_count": 1024, 00:25:00.455 "small_bufsize": 8192, 00:25:00.455 "large_bufsize": 135168 00:25:00.455 } 00:25:00.455 } 00:25:00.455 ] 00:25:00.455 }, 00:25:00.455 { 00:25:00.455 "subsystem": "sock", 00:25:00.455 "config": [ 00:25:00.455 { 00:25:00.455 "method": "sock_set_default_impl", 00:25:00.455 "params": { 00:25:00.455 "impl_name": "posix" 00:25:00.455 } 00:25:00.455 }, 00:25:00.455 { 00:25:00.455 "method": "sock_impl_set_options", 00:25:00.455 "params": { 00:25:00.455 "impl_name": "ssl", 00:25:00.455 
"recv_buf_size": 4096, 00:25:00.455 "send_buf_size": 4096, 00:25:00.455 "enable_recv_pipe": true, 00:25:00.455 "enable_quickack": false, 00:25:00.455 "enable_placement_id": 0, 00:25:00.455 "enable_zerocopy_send_server": true, 00:25:00.455 "enable_zerocopy_send_client": false, 00:25:00.455 "zerocopy_threshold": 0, 00:25:00.455 "tls_version": 0, 00:25:00.455 "enable_ktls": false 00:25:00.455 } 00:25:00.455 }, 00:25:00.455 { 00:25:00.455 "method": "sock_impl_set_options", 00:25:00.455 "params": { 00:25:00.455 "impl_name": "posix", 00:25:00.455 "recv_buf_size": 2097152, 00:25:00.455 "send_buf_size": 2097152, 00:25:00.455 "enable_recv_pipe": true, 00:25:00.455 "enable_quickack": false, 00:25:00.455 "enable_placement_id": 0, 00:25:00.455 "enable_zerocopy_send_server": true, 00:25:00.455 "enable_zerocopy_send_client": false, 00:25:00.455 "zerocopy_threshold": 0, 00:25:00.455 "tls_version": 0, 00:25:00.455 "enable_ktls": false 00:25:00.455 } 00:25:00.455 } 00:25:00.455 ] 00:25:00.455 }, 00:25:00.455 { 00:25:00.455 "subsystem": "vmd", 00:25:00.455 "config": [] 00:25:00.455 }, 00:25:00.455 { 00:25:00.455 "subsystem": "accel", 00:25:00.455 "config": [ 00:25:00.455 { 00:25:00.455 "method": "accel_set_options", 00:25:00.455 "params": { 00:25:00.455 "small_cache_size": 128, 00:25:00.455 "large_cache_size": 16, 00:25:00.455 "task_count": 2048, 00:25:00.455 "sequence_count": 2048, 00:25:00.455 "buf_count": 2048 00:25:00.455 } 00:25:00.455 } 00:25:00.455 ] 00:25:00.455 }, 00:25:00.455 { 00:25:00.455 "subsystem": "bdev", 00:25:00.455 "config": [ 00:25:00.455 { 00:25:00.455 "method": "bdev_set_options", 00:25:00.455 "params": { 00:25:00.455 "bdev_io_pool_size": 65535, 00:25:00.455 "bdev_io_cache_size": 256, 00:25:00.455 "bdev_auto_examine": true, 00:25:00.455 "iobuf_small_cache_size": 128, 00:25:00.455 "iobuf_large_cache_size": 16 00:25:00.455 } 00:25:00.455 }, 00:25:00.455 { 00:25:00.455 "method": "bdev_raid_set_options", 00:25:00.455 "params": { 00:25:00.455 "process_window_size_kb": 1024, 00:25:00.455 "process_max_bandwidth_mb_sec": 0 00:25:00.455 } 00:25:00.455 }, 00:25:00.455 { 00:25:00.455 "method": "bdev_iscsi_set_options", 00:25:00.455 "params": { 00:25:00.455 "timeout_sec": 30 00:25:00.455 } 00:25:00.455 }, 00:25:00.455 { 00:25:00.455 "method": "bdev_nvme_set_options", 00:25:00.455 "params": { 00:25:00.455 "action_on_timeout": "none", 00:25:00.455 "timeout_us": 0, 00:25:00.455 "timeout_admin_us": 0, 00:25:00.455 "keep_alive_timeout_ms": 10000, 00:25:00.455 "arbitration_burst": 0, 00:25:00.455 "low_priority_weight": 0, 00:25:00.455 "medium_priority_weight": 0, 00:25:00.455 "high_priority_weight": 0, 00:25:00.455 "nvme_adminq_poll_period_us": 10000, 00:25:00.455 "nvme_ioq_poll_period_us": 0, 00:25:00.455 "io_queue_requests": 512, 00:25:00.455 "delay_cmd_submit": true, 00:25:00.455 "transport_retry_count": 4, 00:25:00.455 "bdev_retry_count": 3, 00:25:00.455 "transport_ack_timeout": 0, 00:25:00.455 "ctrlr_loss_timeout_sec": 0, 00:25:00.455 "reconnect_delay_sec": 0, 00:25:00.455 "fast_io_fail_timeout_sec": 0, 00:25:00.455 "disable_auto_failback": false, 00:25:00.455 "generate_uuids": false, 00:25:00.455 "transport_tos": 0, 00:25:00.455 "nvme_error_stat": false, 00:25:00.455 "rdma_srq_size": 0, 00:25:00.455 "io_path_stat": false, 00:25:00.455 "allow_accel_sequence": false, 00:25:00.455 "rdma_max_cq_size": 0, 00:25:00.455 "rdma_cm_event_timeout_ms": 0, 00:25:00.455 "dhchap_digests": [ 00:25:00.455 "sha256", 00:25:00.455 "sha384", 00:25:00.455 "sha512" 00:25:00.455 ], 00:25:00.455 "dhchap_dhgroups": [ 
00:25:00.455 "null", 00:25:00.455 "ffdhe2048", 00:25:00.455 "ffdhe3072", 00:25:00.455 "ffdhe4096", 00:25:00.455 "ffdhe6144", 00:25:00.455 "ffdhe8192" 00:25:00.455 ] 00:25:00.455 } 00:25:00.455 }, 00:25:00.455 { 00:25:00.455 "method": "bdev_nvme_attach_controller", 00:25:00.455 "params": { 00:25:00.455 "name": "nvme0", 00:25:00.455 "trtype": "TCP", 00:25:00.455 "adrfam": "IPv4", 00:25:00.455 "traddr": "10.0.0.2", 00:25:00.455 "trsvcid": "4420", 00:25:00.455 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.455 "prchk_reftag": false, 00:25:00.455 "prchk_guard": false, 00:25:00.455 "ctrlr_loss_timeout_sec": 0, 00:25:00.455 "reconnect_delay_sec": 0, 00:25:00.455 "fast_io_fail_timeout_sec": 0, 00:25:00.455 "psk": "key0", 00:25:00.455 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:00.455 "hdgst": false, 00:25:00.455 "ddgst": false, 00:25:00.455 "multipath": "multipath" 00:25:00.455 } 00:25:00.455 }, 00:25:00.455 { 00:25:00.455 "method": "bdev_nvme_set_hotplug", 00:25:00.455 "params": { 00:25:00.455 "period_us": 100000, 00:25:00.455 "enable": false 00:25:00.455 } 00:25:00.455 }, 00:25:00.455 { 00:25:00.455 "method": "bdev_enable_histogram", 00:25:00.455 "params": { 00:25:00.455 "name": "nvme0n1", 00:25:00.455 "enable": true 00:25:00.455 } 00:25:00.455 }, 00:25:00.455 { 00:25:00.455 "method": "bdev_wait_for_examine" 00:25:00.455 } 00:25:00.455 ] 00:25:00.455 }, 00:25:00.455 { 00:25:00.455 "subsystem": "nbd", 00:25:00.455 "config": [] 00:25:00.455 } 00:25:00.455 ] 00:25:00.455 }' 00:25:00.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 399512 00:25:00.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 399512 ']' 00:25:00.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 399512 00:25:00.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:00.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:00.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 399512 00:25:00.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:00.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:00.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 399512' 00:25:00.455 killing process with pid 399512 00:25:00.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 399512 00:25:00.455 Received shutdown signal, test time was about 1.000000 seconds 00:25:00.455 00:25:00.456 Latency(us) 00:25:00.456 [2024-10-08T15:40:52.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.456 [2024-10-08T15:40:52.448Z] =================================================================================================================== 00:25:00.456 [2024-10-08T15:40:52.448Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:00.456 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 399512 00:25:00.716 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 399332 00:25:00.716 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 399332 ']' 00:25:00.716 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 399332 00:25:00.716 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:00.716 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:00.716 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 399332 00:25:00.716 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:00.716 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:00.716 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 399332' 00:25:00.716 killing process with pid 399332 00:25:00.716 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 399332 00:25:00.716 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 399332 00:25:00.716 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:25:00.716 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:00.716 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:00.716 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:00.716 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:25:00.716 "subsystems": [ 00:25:00.716 { 00:25:00.716 "subsystem": "keyring", 00:25:00.716 "config": [ 00:25:00.716 { 00:25:00.716 "method": "keyring_file_add_key", 00:25:00.716 "params": { 00:25:00.716 "name": "key0", 00:25:00.716 "path": "/tmp/tmp.OQboOdiAzi" 00:25:00.716 } 00:25:00.716 } 00:25:00.716 ] 00:25:00.716 }, 00:25:00.716 { 00:25:00.716 "subsystem": "iobuf", 00:25:00.716 "config": [ 00:25:00.716 { 00:25:00.716 "method": "iobuf_set_options", 00:25:00.716 "params": { 00:25:00.716 "small_pool_count": 8192, 00:25:00.716 "large_pool_count": 1024, 00:25:00.716 "small_bufsize": 8192, 00:25:00.716 "large_bufsize": 135168 00:25:00.716 } 00:25:00.716 } 00:25:00.716 ] 00:25:00.716 }, 00:25:00.716 { 00:25:00.716 "subsystem": "sock", 00:25:00.716 "config": [ 00:25:00.716 { 00:25:00.716 "method": "sock_set_default_impl", 00:25:00.716 "params": { 00:25:00.716 "impl_name": "posix" 00:25:00.716 } 00:25:00.716 }, 00:25:00.716 { 00:25:00.716 "method": "sock_impl_set_options", 00:25:00.716 "params": { 00:25:00.716 "impl_name": "ssl", 00:25:00.716 "recv_buf_size": 4096, 00:25:00.716 "send_buf_size": 4096, 00:25:00.716 "enable_recv_pipe": true, 00:25:00.716 "enable_quickack": false, 00:25:00.716 "enable_placement_id": 0, 00:25:00.716 "enable_zerocopy_send_server": true, 00:25:00.716 "enable_zerocopy_send_client": false, 00:25:00.716 "zerocopy_threshold": 0, 00:25:00.716 "tls_version": 0, 00:25:00.716 "enable_ktls": false 00:25:00.716 } 00:25:00.716 }, 00:25:00.716 { 00:25:00.716 "method": "sock_impl_set_options", 00:25:00.716 "params": { 00:25:00.716 "impl_name": "posix", 00:25:00.716 "recv_buf_size": 2097152, 00:25:00.716 "send_buf_size": 2097152, 00:25:00.716 "enable_recv_pipe": true, 00:25:00.716 "enable_quickack": false, 00:25:00.716 "enable_placement_id": 0, 00:25:00.716 "enable_zerocopy_send_server": true, 00:25:00.716 "enable_zerocopy_send_client": false, 00:25:00.716 "zerocopy_threshold": 0, 00:25:00.716 "tls_version": 0, 00:25:00.716 "enable_ktls": false 
00:25:00.716 } 00:25:00.716 } 00:25:00.716 ] 00:25:00.716 }, 00:25:00.716 { 00:25:00.716 "subsystem": "vmd", 00:25:00.716 "config": [] 00:25:00.716 }, 00:25:00.716 { 00:25:00.716 "subsystem": "accel", 00:25:00.716 "config": [ 00:25:00.716 { 00:25:00.716 "method": "accel_set_options", 00:25:00.716 "params": { 00:25:00.716 "small_cache_size": 128, 00:25:00.716 "large_cache_size": 16, 00:25:00.716 "task_count": 2048, 00:25:00.716 "sequence_count": 2048, 00:25:00.716 "buf_count": 2048 00:25:00.716 } 00:25:00.716 } 00:25:00.716 ] 00:25:00.716 }, 00:25:00.716 { 00:25:00.716 "subsystem": "bdev", 00:25:00.716 "config": [ 00:25:00.716 { 00:25:00.716 "method": "bdev_set_options", 00:25:00.716 "params": { 00:25:00.716 "bdev_io_pool_size": 65535, 00:25:00.716 "bdev_io_cache_size": 256, 00:25:00.716 "bdev_auto_examine": true, 00:25:00.716 "iobuf_small_cache_size": 128, 00:25:00.716 "iobuf_large_cache_size": 16 00:25:00.716 } 00:25:00.716 }, 00:25:00.716 { 00:25:00.716 "method": "bdev_raid_set_options", 00:25:00.716 "params": { 00:25:00.716 "process_window_size_kb": 1024, 00:25:00.716 "process_max_bandwidth_mb_sec": 0 00:25:00.716 } 00:25:00.716 }, 00:25:00.716 { 00:25:00.716 "method": "bdev_iscsi_set_options", 00:25:00.716 "params": { 00:25:00.716 "timeout_sec": 30 00:25:00.716 } 00:25:00.716 }, 00:25:00.716 { 00:25:00.716 "method": "bdev_nvme_set_options", 00:25:00.716 "params": { 00:25:00.716 "action_on_timeout": "none", 00:25:00.716 "timeout_us": 0, 00:25:00.716 "timeout_admin_us": 0, 00:25:00.716 "keep_alive_timeout_ms": 10000, 00:25:00.716 "arbitration_burst": 0, 00:25:00.716 "low_priority_weight": 0, 00:25:00.716 "medium_priority_weight": 0, 00:25:00.716 "high_priority_weight": 0, 00:25:00.716 "nvme_adminq_poll_period_us": 10000, 00:25:00.716 "nvme_ioq_poll_period_us": 0, 00:25:00.716 "io_queue_requests": 0, 00:25:00.716 "delay_cmd_submit": true, 00:25:00.716 "transport_retry_count": 4, 00:25:00.716 "bdev_retry_count": 3, 00:25:00.716 "transport_ack_timeout": 0, 00:25:00.716 "ctrlr_loss_timeout_sec": 0, 00:25:00.716 "reconnect_delay_sec": 0, 00:25:00.716 "fast_io_fail_timeout_sec": 0, 00:25:00.716 "disable_auto_failback": false, 00:25:00.717 "generate_uuids": false, 00:25:00.717 "transport_tos": 0, 00:25:00.717 "nvme_error_stat": false, 00:25:00.717 "rdma_srq_size": 0, 00:25:00.717 "io_path_stat": false, 00:25:00.717 "allow_accel_sequence": false, 00:25:00.717 "rdma_max_cq_size": 0, 00:25:00.717 "rdma_cm_event_timeout_ms": 0, 00:25:00.717 "dhchap_digests": [ 00:25:00.717 "sha256", 00:25:00.717 "sha384", 00:25:00.717 "sha512" 00:25:00.717 ], 00:25:00.717 "dhchap_dhgroups": [ 00:25:00.717 "null", 00:25:00.717 "ffdhe2048", 00:25:00.717 "ffdhe3072", 00:25:00.717 "ffdhe4096", 00:25:00.717 "ffdhe6144", 00:25:00.717 "ffdhe8192" 00:25:00.717 ] 00:25:00.717 } 00:25:00.717 }, 00:25:00.717 { 00:25:00.717 "method": "bdev_nvme_set_hotplug", 00:25:00.717 "params": { 00:25:00.717 "period_us": 100000, 00:25:00.717 "enable": false 00:25:00.717 } 00:25:00.717 }, 00:25:00.717 { 00:25:00.717 "method": "bdev_malloc_create", 00:25:00.717 "params": { 00:25:00.717 "name": "malloc0", 00:25:00.717 "num_blocks": 8192, 00:25:00.717 "block_size": 4096, 00:25:00.717 "physical_block_size": 4096, 00:25:00.717 "uuid": "a97a8795-9ed2-4081-a6d8-825649567a40", 00:25:00.717 "optimal_io_boundary": 0, 00:25:00.717 "md_size": 0, 00:25:00.717 "dif_type": 0, 00:25:00.717 "dif_is_head_of_md": false, 00:25:00.717 "dif_pi_format": 0 00:25:00.717 } 00:25:00.717 }, 00:25:00.717 { 00:25:00.717 "method": "bdev_wait_for_examine" 00:25:00.717 } 
00:25:00.717 ] 00:25:00.717 }, 00:25:00.717 { 00:25:00.717 "subsystem": "nbd", 00:25:00.717 "config": [] 00:25:00.717 }, 00:25:00.717 { 00:25:00.717 "subsystem": "scheduler", 00:25:00.717 "config": [ 00:25:00.717 { 00:25:00.717 "method": "framework_set_scheduler", 00:25:00.717 "params": { 00:25:00.717 "name": "static" 00:25:00.717 } 00:25:00.717 } 00:25:00.717 ] 00:25:00.717 }, 00:25:00.717 { 00:25:00.717 "subsystem": "nvmf", 00:25:00.717 "config": [ 00:25:00.717 { 00:25:00.717 "method": "nvmf_set_config", 00:25:00.717 "params": { 00:25:00.717 "discovery_filter": "match_any", 00:25:00.717 "admin_cmd_passthru": { 00:25:00.717 "identify_ctrlr": false 00:25:00.717 }, 00:25:00.717 "dhchap_digests": [ 00:25:00.717 "sha256", 00:25:00.717 "sha384", 00:25:00.717 "sha512" 00:25:00.717 ], 00:25:00.717 "dhchap_dhgroups": [ 00:25:00.717 "null", 00:25:00.717 "ffdhe2048", 00:25:00.717 "ffdhe3072", 00:25:00.717 "ffdhe4096", 00:25:00.717 "ffdhe6144", 00:25:00.717 "ffdhe8192" 00:25:00.717 ] 00:25:00.717 } 00:25:00.717 }, 00:25:00.717 { 00:25:00.717 "method": "nvmf_set_max_subsystems", 00:25:00.717 "params": { 00:25:00.717 "max_subsystems": 1024 00:25:00.717 } 00:25:00.717 }, 00:25:00.717 { 00:25:00.717 "method": "nvmf_set_crdt", 00:25:00.717 "params": { 00:25:00.717 "crdt1": 0, 00:25:00.717 "crdt2": 0, 00:25:00.717 "crdt3": 0 00:25:00.717 } 00:25:00.717 }, 00:25:00.717 { 00:25:00.717 "method": "nvmf_create_transport", 00:25:00.717 "params": { 00:25:00.717 "trtype": "TCP", 00:25:00.717 "max_queue_depth": 128, 00:25:00.717 "max_io_qpairs_per_ctrlr": 127, 00:25:00.717 "in_capsule_data_size": 4096, 00:25:00.717 "max_io_size": 131072, 00:25:00.717 "io_unit_size": 131072, 00:25:00.717 "max_aq_depth": 128, 00:25:00.717 "num_shared_buffers": 511, 00:25:00.717 "buf_cache_size": 4294967295, 00:25:00.717 "dif_insert_or_strip": false, 00:25:00.717 "zcopy": false, 00:25:00.717 "c2h_success": false, 00:25:00.717 "sock_priority": 0, 00:25:00.717 "abort_timeout_sec": 1, 00:25:00.717 "ack_timeout": 0, 00:25:00.717 "data_wr_pool_size": 0 00:25:00.717 } 00:25:00.717 }, 00:25:00.717 { 00:25:00.717 "method": "nvmf_create_subsystem", 00:25:00.717 "params": { 00:25:00.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.717 "allow_any_host": false, 00:25:00.717 "serial_number": "00000000000000000000", 00:25:00.717 "model_number": "SPDK bdev Controller", 00:25:00.717 "max_namespaces": 32, 00:25:00.717 "min_cntlid": 1, 00:25:00.717 "max_cntlid": 65519, 00:25:00.717 "ana_reporting": false 00:25:00.717 } 00:25:00.717 }, 00:25:00.717 { 00:25:00.717 "method": "nvmf_subsystem_add_host", 00:25:00.717 "params": { 00:25:00.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.717 "host": "nqn.2016-06.io.spdk:host1", 00:25:00.717 "psk": "key0" 00:25:00.717 } 00:25:00.717 }, 00:25:00.717 { 00:25:00.717 "method": "nvmf_subsystem_add_ns", 00:25:00.717 "params": { 00:25:00.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.717 "namespace": { 00:25:00.717 "nsid": 1, 00:25:00.717 "bdev_name": "malloc0", 00:25:00.717 "nguid": "A97A87959ED24081A6D8825649567A40", 00:25:00.717 "uuid": "a97a8795-9ed2-4081-a6d8-825649567a40", 00:25:00.717 "no_auto_visible": false 00:25:00.717 } 00:25:00.717 } 00:25:00.717 }, 00:25:00.717 { 00:25:00.717 "method": "nvmf_subsystem_add_listener", 00:25:00.717 "params": { 00:25:00.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.717 "listen_address": { 00:25:00.717 "trtype": "TCP", 00:25:00.717 "adrfam": "IPv4", 00:25:00.717 "traddr": "10.0.0.2", 00:25:00.717 "trsvcid": "4420" 00:25:00.717 }, 00:25:00.717 "secure_channel": false, 
00:25:00.717 "sock_impl": "ssl" 00:25:00.717 } 00:25:00.717 } 00:25:00.717 ] 00:25:00.717 } 00:25:00.717 ] 00:25:00.717 }' 00:25:00.717 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=400054 00:25:00.717 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 400054 00:25:00.717 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:00.717 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 400054 ']' 00:25:00.717 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.717 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:00.717 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.717 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:00.717 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:00.978 [2024-10-08 17:40:52.767939] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:25:00.978 [2024-10-08 17:40:52.768009] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.978 [2024-10-08 17:40:52.855765] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.978 [2024-10-08 17:40:52.922632] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.978 [2024-10-08 17:40:52.922669] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.978 [2024-10-08 17:40:52.922674] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.978 [2024-10-08 17:40:52.922679] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.978 [2024-10-08 17:40:52.922687] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:00.978 [2024-10-08 17:40:52.923186] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.238 [2024-10-08 17:40:53.132934] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.238 [2024-10-08 17:40:53.164947] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:01.238 [2024-10-08 17:40:53.165138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:01.809 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:01.809 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:01.809 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:01.809 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:01.809 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:01.809 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.809 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:01.809 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=400395 00:25:01.809 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 400395 /var/tmp/bdevperf.sock 00:25:01.809 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 400395 ']' 00:25:01.809 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:01.809 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:01.809 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:01.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
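The bdevperf side is brought up the same way: the bperfcfg JSON echoed below, which already contains the keyring_file_add_key, bdev_nvme_attach_controller, and bdev_enable_histogram steps, is passed in over -c /dev/fd/63 so the TLS controller attaches during startup. A sketch under the same process-substitution assumption:

  # -z waits for RPC-driven tests; -c replays the saved initiator config
  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")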
00:25:01.809 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:25:01.809 "subsystems": [ 00:25:01.809 { 00:25:01.809 "subsystem": "keyring", 00:25:01.809 "config": [ 00:25:01.809 { 00:25:01.809 "method": "keyring_file_add_key", 00:25:01.809 "params": { 00:25:01.809 "name": "key0", 00:25:01.809 "path": "/tmp/tmp.OQboOdiAzi" 00:25:01.809 } 00:25:01.809 } 00:25:01.809 ] 00:25:01.809 }, 00:25:01.809 { 00:25:01.809 "subsystem": "iobuf", 00:25:01.809 "config": [ 00:25:01.809 { 00:25:01.809 "method": "iobuf_set_options", 00:25:01.809 "params": { 00:25:01.809 "small_pool_count": 8192, 00:25:01.809 "large_pool_count": 1024, 00:25:01.809 "small_bufsize": 8192, 00:25:01.809 "large_bufsize": 135168 00:25:01.809 } 00:25:01.809 } 00:25:01.809 ] 00:25:01.809 }, 00:25:01.809 { 00:25:01.809 "subsystem": "sock", 00:25:01.809 "config": [ 00:25:01.809 { 00:25:01.809 "method": "sock_set_default_impl", 00:25:01.810 "params": { 00:25:01.810 "impl_name": "posix" 00:25:01.810 } 00:25:01.810 }, 00:25:01.810 { 00:25:01.810 "method": "sock_impl_set_options", 00:25:01.810 "params": { 00:25:01.810 "impl_name": "ssl", 00:25:01.810 "recv_buf_size": 4096, 00:25:01.810 "send_buf_size": 4096, 00:25:01.810 "enable_recv_pipe": true, 00:25:01.810 "enable_quickack": false, 00:25:01.810 "enable_placement_id": 0, 00:25:01.810 "enable_zerocopy_send_server": true, 00:25:01.810 "enable_zerocopy_send_client": false, 00:25:01.810 "zerocopy_threshold": 0, 00:25:01.810 "tls_version": 0, 00:25:01.810 "enable_ktls": false 00:25:01.810 } 00:25:01.810 }, 00:25:01.810 { 00:25:01.810 "method": "sock_impl_set_options", 00:25:01.810 "params": { 00:25:01.810 "impl_name": "posix", 00:25:01.810 "recv_buf_size": 2097152, 00:25:01.810 "send_buf_size": 2097152, 00:25:01.810 "enable_recv_pipe": true, 00:25:01.810 "enable_quickack": false, 00:25:01.810 "enable_placement_id": 0, 00:25:01.810 "enable_zerocopy_send_server": true, 00:25:01.810 "enable_zerocopy_send_client": false, 00:25:01.810 "zerocopy_threshold": 0, 00:25:01.810 "tls_version": 0, 00:25:01.810 "enable_ktls": false 00:25:01.810 } 00:25:01.810 } 00:25:01.810 ] 00:25:01.810 }, 00:25:01.810 { 00:25:01.810 "subsystem": "vmd", 00:25:01.810 "config": [] 00:25:01.810 }, 00:25:01.810 { 00:25:01.810 "subsystem": "accel", 00:25:01.810 "config": [ 00:25:01.810 { 00:25:01.810 "method": "accel_set_options", 00:25:01.810 "params": { 00:25:01.810 "small_cache_size": 128, 00:25:01.810 "large_cache_size": 16, 00:25:01.810 "task_count": 2048, 00:25:01.810 "sequence_count": 2048, 00:25:01.810 "buf_count": 2048 00:25:01.810 } 00:25:01.810 } 00:25:01.810 ] 00:25:01.810 }, 00:25:01.810 { 00:25:01.810 "subsystem": "bdev", 00:25:01.810 "config": [ 00:25:01.810 { 00:25:01.810 "method": "bdev_set_options", 00:25:01.810 "params": { 00:25:01.810 "bdev_io_pool_size": 65535, 00:25:01.810 "bdev_io_cache_size": 256, 00:25:01.810 "bdev_auto_examine": true, 00:25:01.810 "iobuf_small_cache_size": 128, 00:25:01.810 "iobuf_large_cache_size": 16 00:25:01.810 } 00:25:01.810 }, 00:25:01.810 { 00:25:01.810 "method": "bdev_raid_set_options", 00:25:01.810 "params": { 00:25:01.810 "process_window_size_kb": 1024, 00:25:01.810 "process_max_bandwidth_mb_sec": 0 00:25:01.810 } 00:25:01.810 }, 00:25:01.810 { 00:25:01.810 "method": "bdev_iscsi_set_options", 00:25:01.810 "params": { 00:25:01.810 "timeout_sec": 30 00:25:01.810 } 00:25:01.810 }, 00:25:01.810 { 00:25:01.810 "method": "bdev_nvme_set_options", 00:25:01.810 "params": { 00:25:01.810 "action_on_timeout": "none", 00:25:01.810 "timeout_us": 0, 
00:25:01.810 "timeout_admin_us": 0, 00:25:01.810 "keep_alive_timeout_ms": 10000, 00:25:01.810 "arbitration_burst": 0, 00:25:01.810 "low_priority_weight": 0, 00:25:01.810 "medium_priority_weight": 0, 00:25:01.810 "high_priority_weight": 0, 00:25:01.810 "nvme_adminq_poll_period_us": 10000, 00:25:01.810 "nvme_ioq_poll_period_us": 0, 00:25:01.810 "io_queue_requests": 512, 00:25:01.810 "delay_cmd_submit": true, 00:25:01.810 "transport_retry_count": 4, 00:25:01.810 "bdev_retry_count": 3, 00:25:01.810 "transport_ack_timeout": 0, 00:25:01.810 "ctrlr_loss_timeout_sec": 0, 00:25:01.810 "reconnect_delay_sec": 0, 00:25:01.810 "fast_io_fail_timeout_sec": 0, 00:25:01.810 "disable_auto_failback": false, 00:25:01.810 "generate_uuids": false, 00:25:01.810 "transport_tos": 0, 00:25:01.810 "nvme_error_stat": false, 00:25:01.810 "rdma_srq_size": 0, 00:25:01.810 "io_path_stat": false, 00:25:01.810 "allow_accel_sequence": false, 00:25:01.810 "rdma_max_cq_size": 0, 00:25:01.810 "rdma_cm_event_timeout_ms": 0, 00:25:01.810 "dhchap_digests": [ 00:25:01.810 "sha256", 00:25:01.810 "sha384", 00:25:01.810 "sha512" 00:25:01.810 ], 00:25:01.810 "dhchap_dhgroups": [ 00:25:01.810 "null", 00:25:01.810 "ffdhe2048", 00:25:01.810 "ffdhe3072", 00:25:01.810 "ffdhe4096", 00:25:01.810 "ffdhe6144", 00:25:01.810 "ffdhe8192" 00:25:01.810 ] 00:25:01.810 } 00:25:01.810 }, 00:25:01.810 { 00:25:01.810 "method": "bdev_nvme_attach_controller", 00:25:01.810 "params": { 00:25:01.810 "name": "nvme0", 00:25:01.810 "trtype": "TCP", 00:25:01.810 "adrfam": "IPv4", 00:25:01.810 "traddr": "10.0.0.2", 00:25:01.810 "trsvcid": "4420", 00:25:01.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:01.810 "prchk_reftag": false, 00:25:01.810 "prchk_guard": false, 00:25:01.810 "ctrlr_loss_timeout_sec": 0, 00:25:01.810 "reconnect_delay_sec": 0, 00:25:01.810 "fast_io_fail_timeout_sec": 0, 00:25:01.810 "psk": "key0", 00:25:01.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:01.810 "hdgst": false, 00:25:01.810 "ddgst": false, 00:25:01.810 "multipath": "multipath" 00:25:01.810 } 00:25:01.810 }, 00:25:01.810 { 00:25:01.810 "method": "bdev_nvme_set_hotplug", 00:25:01.810 "params": { 00:25:01.810 "period_us": 100000, 00:25:01.810 "enable": false 00:25:01.810 } 00:25:01.810 }, 00:25:01.810 { 00:25:01.810 "method": "bdev_enable_histogram", 00:25:01.810 "params": { 00:25:01.810 "name": "nvme0n1", 00:25:01.810 "enable": true 00:25:01.810 } 00:25:01.810 }, 00:25:01.810 { 00:25:01.810 "method": "bdev_wait_for_examine" 00:25:01.810 } 00:25:01.810 ] 00:25:01.810 }, 00:25:01.810 { 00:25:01.810 "subsystem": "nbd", 00:25:01.810 "config": [] 00:25:01.810 } 00:25:01.810 ] 00:25:01.810 }' 00:25:01.810 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:01.810 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:01.810 [2024-10-08 17:40:53.654942] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:25:01.810 [2024-10-08 17:40:53.655001] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid400395 ] 00:25:01.810 [2024-10-08 17:40:53.734132] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.810 [2024-10-08 17:40:53.787394] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.070 [2024-10-08 17:40:53.922106] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:02.640 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:02.640 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:02.640 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:25:02.640 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:02.899 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.899 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:02.899 Running I/O for 1 seconds... 00:25:03.839 4105.00 IOPS, 16.04 MiB/s 00:25:03.839 Latency(us) 00:25:03.839 [2024-10-08T15:40:55.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.839 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:03.839 Verification LBA range: start 0x0 length 0x2000 00:25:03.839 nvme0n1 : 1.01 4183.93 16.34 0.00 0.00 30409.62 4860.59 90876.59 00:25:03.839 [2024-10-08T15:40:55.831Z] =================================================================================================================== 00:25:03.839 [2024-10-08T15:40:55.831Z] Total : 4183.93 16.34 0.00 0.00 30409.62 4860.59 90876.59 00:25:03.839 { 00:25:03.839 "results": [ 00:25:03.839 { 00:25:03.839 "job": "nvme0n1", 00:25:03.839 "core_mask": "0x2", 00:25:03.839 "workload": "verify", 00:25:03.839 "status": "finished", 00:25:03.839 "verify_range": { 00:25:03.839 "start": 0, 00:25:03.839 "length": 8192 00:25:03.839 }, 00:25:03.839 "queue_depth": 128, 00:25:03.839 "io_size": 4096, 00:25:03.839 "runtime": 1.011728, 00:25:03.839 "iops": 4183.9308588869735, 00:25:03.839 "mibps": 16.34347991752724, 00:25:03.839 "io_failed": 0, 00:25:03.839 "io_timeout": 0, 00:25:03.839 "avg_latency_us": 30409.62454366485, 00:25:03.839 "min_latency_us": 4860.586666666667, 00:25:03.839 "max_latency_us": 90876.58666666667 00:25:03.839 } 00:25:03.839 ], 00:25:03.839 "core_count": 1 00:25:03.839 } 00:25:03.839 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:25:03.839 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:25:03.839 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:03.839 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:25:03.839 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:25:03.839 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = 
--pid ']' 00:25:03.839 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:03.839 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:25:03.839 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:25:03.839 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:25:03.839 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:03.839 nvmf_trace.0 00:25:04.100 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:25:04.100 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 400395 00:25:04.100 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 400395 ']' 00:25:04.100 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 400395 00:25:04.100 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:04.100 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:04.100 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 400395 00:25:04.100 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:04.100 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:04.100 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 400395' 00:25:04.100 killing process with pid 400395 00:25:04.100 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 400395 00:25:04.100 Received shutdown signal, test time was about 1.000000 seconds 00:25:04.100 00:25:04.100 Latency(us) 00:25:04.100 [2024-10-08T15:40:56.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.100 [2024-10-08T15:40:56.092Z] =================================================================================================================== 00:25:04.100 [2024-10-08T15:40:56.092Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:04.100 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 400395 00:25:04.100 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:04.100 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:04.100 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:25:04.100 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:04.100 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:25:04.100 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:04.100 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:04.100 rmmod nvme_tcp 00:25:04.100 rmmod nvme_fabrics 00:25:04.361 rmmod nvme_keyring 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:04.361 17:40:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 400054 ']' 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 400054 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 400054 ']' 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 400054 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 400054 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 400054' 00:25:04.361 killing process with pid 400054 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 400054 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 400054 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:04.361 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.GFGlMms9ws /tmp/tmp.PVy2cOApQe /tmp/tmp.OQboOdiAzi 00:25:06.908 00:25:06.908 real 1m29.460s 00:25:06.908 user 2m22.834s 00:25:06.908 sys 0m25.948s 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:06.908 ************************************ 00:25:06.908 END TEST nvmf_tls 00:25:06.908 
************************************ 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:06.908 ************************************ 00:25:06.908 START TEST nvmf_fips 00:25:06.908 ************************************ 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:06.908 * Looking for test storage... 00:25:06.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:06.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.908 --rc genhtml_branch_coverage=1 00:25:06.908 --rc genhtml_function_coverage=1 00:25:06.908 --rc genhtml_legend=1 00:25:06.908 --rc geninfo_all_blocks=1 00:25:06.908 --rc geninfo_unexecuted_blocks=1 00:25:06.908 00:25:06.908 ' 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:06.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.908 --rc genhtml_branch_coverage=1 00:25:06.908 --rc genhtml_function_coverage=1 00:25:06.908 --rc genhtml_legend=1 00:25:06.908 --rc geninfo_all_blocks=1 00:25:06.908 --rc geninfo_unexecuted_blocks=1 00:25:06.908 00:25:06.908 ' 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:06.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.908 --rc genhtml_branch_coverage=1 00:25:06.908 --rc genhtml_function_coverage=1 00:25:06.908 --rc genhtml_legend=1 00:25:06.908 --rc geninfo_all_blocks=1 00:25:06.908 --rc geninfo_unexecuted_blocks=1 00:25:06.908 00:25:06.908 ' 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:06.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.908 --rc genhtml_branch_coverage=1 00:25:06.908 --rc genhtml_function_coverage=1 00:25:06.908 --rc genhtml_legend=1 00:25:06.908 --rc geninfo_all_blocks=1 00:25:06.908 --rc geninfo_unexecuted_blocks=1 00:25:06.908 00:25:06.908 ' 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:06.908 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:06.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:25:06.909 17:40:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:25:06.909 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:25:07.170 Error setting digest 00:25:07.170 40D28ACD997F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:25:07.170 40D28ACD997F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:07.170 
17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:25:07.170 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:15.316 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:15.316 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:25:15.316 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:15.316 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:15.316 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:15.316 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:15.316 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:15.316 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:25:15.316 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:15.316 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:15.317 17:41:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:15.317 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:15.317 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:15.317 17:41:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:15.317 Found net devices under 0000:31:00.0: cvl_0_0 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:15.317 Found net devices under 0000:31:00.1: cvl_0_1 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:15.317 17:41:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:15.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:15.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:25:15.317 00:25:15.317 --- 10.0.0.2 ping statistics --- 00:25:15.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.317 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:15.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:15.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:25:15.317 00:25:15.317 --- 10.0.0.1 ping statistics --- 00:25:15.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.317 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=405174 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 405174 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 405174 ']' 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.317 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:15.318 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.318 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:15.318 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:15.318 [2024-10-08 17:41:06.766728] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:25:15.318 [2024-10-08 17:41:06.766798] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.318 [2024-10-08 17:41:06.857519] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.318 [2024-10-08 17:41:06.949519] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.318 [2024-10-08 17:41:06.949573] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.318 [2024-10-08 17:41:06.949581] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.318 [2024-10-08 17:41:06.949588] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.318 [2024-10-08 17:41:06.949600] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:15.318 [2024-10-08 17:41:06.950419] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.579 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:15.579 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:25:15.579 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:15.579 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:15.579 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:15.839 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.839 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:25:15.839 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:15.839 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:25:15.839 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Ftf 00:25:15.839 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:15.839 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Ftf 00:25:15.839 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Ftf 00:25:15.839 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Ftf 00:25:15.839 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:15.839 [2024-10-08 17:41:07.779478] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.839 [2024-10-08 17:41:07.795446] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:15.839 [2024-10-08 17:41:07.795765] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.101 malloc0 00:25:16.101 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:16.101 17:41:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=405525 00:25:16.101 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 405525 /var/tmp/bdevperf.sock 00:25:16.101 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:16.101 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 405525 ']' 00:25:16.101 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:16.101 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:16.101 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:16.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:16.101 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:16.101 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:16.101 [2024-10-08 17:41:07.948274] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:25:16.101 [2024-10-08 17:41:07.948351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid405525 ] 00:25:16.101 [2024-10-08 17:41:08.031159] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.361 [2024-10-08 17:41:08.121972] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:25:16.933 17:41:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:16.933 17:41:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:25:16.933 17:41:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Ftf 00:25:17.194 17:41:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:17.194 [2024-10-08 17:41:09.114762] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:17.455 TLSTESTn1 00:25:17.455 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:17.455 Running I/O for 10 seconds... 
00:25:19.336 2552.00 IOPS, 9.97 MiB/s [2024-10-08T15:41:12.715Z] 2515.50 IOPS, 9.83 MiB/s [2024-10-08T15:41:13.654Z] 2172.33 IOPS, 8.49 MiB/s [2024-10-08T15:41:14.595Z] 3005.50 IOPS, 11.74 MiB/s [2024-10-08T15:41:15.535Z] 3371.20 IOPS, 13.17 MiB/s [2024-10-08T15:41:16.476Z] 3214.00 IOPS, 12.55 MiB/s [2024-10-08T15:41:17.417Z] 3154.00 IOPS, 12.32 MiB/s [2024-10-08T15:41:18.357Z] 3206.38 IOPS, 12.52 MiB/s [2024-10-08T15:41:19.739Z] 3324.44 IOPS, 12.99 MiB/s [2024-10-08T15:41:19.739Z] 3181.10 IOPS, 12.43 MiB/s 00:25:27.747 Latency(us) 00:25:27.747 [2024-10-08T15:41:19.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.747 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:27.747 Verification LBA range: start 0x0 length 0x2000 00:25:27.747 TLSTESTn1 : 10.10 3163.53 12.36 0.00 0.00 40288.15 6553.60 104420.69 00:25:27.747 [2024-10-08T15:41:19.739Z] =================================================================================================================== 00:25:27.748 [2024-10-08T15:41:19.740Z] Total : 3163.53 12.36 0.00 0.00 40288.15 6553.60 104420.69 00:25:27.748 { 00:25:27.748 "results": [ 00:25:27.748 { 00:25:27.748 "job": "TLSTESTn1", 00:25:27.748 "core_mask": "0x4", 00:25:27.748 "workload": "verify", 00:25:27.748 "status": "finished", 00:25:27.748 "verify_range": { 00:25:27.748 "start": 0, 00:25:27.748 "length": 8192 00:25:27.748 }, 00:25:27.748 "queue_depth": 128, 00:25:27.748 "io_size": 4096, 00:25:27.748 "runtime": 10.096002, 00:25:27.748 "iops": 3163.5294842453477, 00:25:27.748 "mibps": 12.35753704783339, 00:25:27.748 "io_failed": 0, 00:25:27.748 "io_timeout": 0, 00:25:27.748 "avg_latency_us": 40288.15032656, 00:25:27.748 "min_latency_us": 6553.6, 00:25:27.748 "max_latency_us": 104420.69333333333 00:25:27.748 } 00:25:27.748 ], 00:25:27.748 "core_count": 1 00:25:27.748 } 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:27.748 nvmf_trace.0 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 405525 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 405525 ']' 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # 
kill -0 405525 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 405525 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 405525' 00:25:27.748 killing process with pid 405525 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 405525 00:25:27.748 Received shutdown signal, test time was about 10.000000 seconds 00:25:27.748 00:25:27.748 Latency(us) 00:25:27.748 [2024-10-08T15:41:19.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.748 [2024-10-08T15:41:19.740Z] =================================================================================================================== 00:25:27.748 [2024-10-08T15:41:19.740Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 405525 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:27.748 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:28.009 rmmod nvme_tcp 00:25:28.009 rmmod nvme_fabrics 00:25:28.009 rmmod nvme_keyring 00:25:28.009 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:28.009 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:28.009 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:28.009 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 405174 ']' 00:25:28.009 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 405174 00:25:28.009 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 405174 ']' 00:25:28.009 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 405174 00:25:28.009 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:28.009 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:28.009 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 405174 00:25:28.009 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:28.009 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:28.009 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 405174' 00:25:28.009 killing process with pid 405174 00:25:28.009 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 405174 00:25:28.009 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 405174 00:25:28.009 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:28.009 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:28.009 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:28.009 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:25:28.009 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:25:28.009 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:28.009 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:25:28.271 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:28.271 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:28.271 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.271 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.271 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.183 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:30.183 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Ftf 00:25:30.183 00:25:30.183 real 0m23.601s 00:25:30.183 user 0m25.422s 00:25:30.183 sys 0m9.603s 00:25:30.183 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:30.183 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:30.183 ************************************ 00:25:30.183 END TEST nvmf_fips 00:25:30.183 ************************************ 00:25:30.183 17:41:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:30.183 17:41:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:30.183 17:41:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:30.183 17:41:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:30.183 ************************************ 00:25:30.184 START TEST nvmf_control_msg_list 00:25:30.184 ************************************ 00:25:30.184 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:30.446 * Looking for test storage... 
00:25:30.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:30.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.446 --rc genhtml_branch_coverage=1 00:25:30.446 --rc genhtml_function_coverage=1 00:25:30.446 --rc genhtml_legend=1 00:25:30.446 --rc geninfo_all_blocks=1 00:25:30.446 --rc geninfo_unexecuted_blocks=1 00:25:30.446 00:25:30.446 ' 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:30.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.446 --rc genhtml_branch_coverage=1 00:25:30.446 --rc genhtml_function_coverage=1 00:25:30.446 --rc genhtml_legend=1 00:25:30.446 --rc geninfo_all_blocks=1 00:25:30.446 --rc geninfo_unexecuted_blocks=1 00:25:30.446 00:25:30.446 ' 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:30.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.446 --rc genhtml_branch_coverage=1 00:25:30.446 --rc genhtml_function_coverage=1 00:25:30.446 --rc genhtml_legend=1 00:25:30.446 --rc geninfo_all_blocks=1 00:25:30.446 --rc geninfo_unexecuted_blocks=1 00:25:30.446 00:25:30.446 ' 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:30.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.446 --rc genhtml_branch_coverage=1 00:25:30.446 --rc genhtml_function_coverage=1 00:25:30.446 --rc genhtml_legend=1 00:25:30.446 --rc geninfo_all_blocks=1 00:25:30.446 --rc geninfo_unexecuted_blocks=1 00:25:30.446 00:25:30.446 ' 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.446 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.447 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:30.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:30.447 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:30.447 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:30.447 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:30.447 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:30.447 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:30.447 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:30.447 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:30.447 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:30.447 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:30.447 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.447 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:30.447 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.447 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:30.447 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:30.447 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:30.447 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:38.587 17:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:38.587 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.587 17:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:38.587 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:38.587 Found net devices under 0000:31:00.0: cvl_0_0 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:38.587 Found net devices under 0000:31:00.1: cvl_0_1 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.587 17:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:38.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:38.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms 00:25:38.587 00:25:38.587 --- 10.0.0.2 ping statistics --- 00:25:38.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.587 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:25:38.587 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:38.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:38.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:25:38.587 00:25:38.587 --- 10.0.0.1 ping statistics --- 00:25:38.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.587 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:25:38.588 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.588 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:25:38.588 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:38.588 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.588 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:38.588 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:38.588 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.588 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:38.588 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:38.588 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:38.588 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:38.588 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:38.588 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:38.588 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=411968 00:25:38.588 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 411968 00:25:38.588 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:38.588 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 411968 ']' 00:25:38.588 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 
-- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.588 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:38.588 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.588 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:38.588 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:38.588 [2024-10-08 17:41:29.833270] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:25:38.588 [2024-10-08 17:41:29.833334] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.588 [2024-10-08 17:41:29.925396] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.588 [2024-10-08 17:41:30.022277] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:38.588 [2024-10-08 17:41:30.022342] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.588 [2024-10-08 17:41:30.022350] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:38.588 [2024-10-08 17:41:30.022358] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:38.588 [2024-10-08 17:41:30.022364] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
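Condensed for reference, the interface setup traced above amounts to the following sequence. Every command appears verbatim in the trace; only the grouping and the comments are added here. cvl_0_0/cvl_0_1 are the two ports of the Intel E810 adapter (device 0x159b) detected earlier, and cvl_0_0_ns_spdk is the target-side network namespace:

  ip netns add cvl_0_0_ns_spdk                 # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move port 0 into it
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator IP on port 1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP on port 0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open NVMe/TCP port 4420 on the initiator side; the SPDK_NVMF comment tag is
  # what lets the teardown strip exactly this rule later via
  # iptables-save | grep -v SPDK_NVMF | iptables-restore.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
  # The target then runs inside the namespace:
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF

Traffic between 10.0.0.1 and 10.0.0.2 therefore crosses the physical E810 link rather than kernel loopback, which is the point of NET_TYPE=phy; the two ping checks above confirm both directions before the target starts.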
00:25:38.588 [2024-10-08 17:41:30.023192] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:38.849 [2024-10-08 17:41:30.694015] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:38.849 Malloc0 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:38.849 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.850 17:41:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:38.850 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.850 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:38.850 [2024-10-08 17:41:30.760458] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.850 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.850 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=412289 00:25:38.850 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:38.850 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=412290 00:25:38.850 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:38.850 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=412291 00:25:38.850 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 412289 00:25:38.850 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:38.850 [2024-10-08 17:41:30.831052] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:38.850 [2024-10-08 17:41:30.841025] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:38.850 [2024-10-08 17:41:30.841417] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:40.241 Initializing NVMe Controllers 00:25:40.241 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:40.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:40.241 Initialization complete. Launching workers. 
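For readers skimming the trace: the rpc_cmd calls above boil down to five RPCs against the freshly started target, followed by three single-queue-depth perf initiators on separate cores. A condensed sketch follows; the rpc.py invocation form is an assumption (rpc.py is the usual backend of the autotest rpc_cmd helper), and the loop is illustrative rather than the literal test script, which launches the three runs as explicit background jobs (pids 412289/412290/412291 above). The three result tables printed below are their output.

  # Transport deliberately sized down: one control message buffer in total.
  spdk/scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  spdk/scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  spdk/scripts/rpc.py bdev_malloc_create -b Malloc0 32 512      # 32 MB ramdisk, 512 B blocks
  spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Three initiators on core masks 0x2/0x4/0x8: queue depth 1, 4 KiB random reads, 1 s each.
  for mask in 0x2 0x4 0x8; do
      spdk/build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  done
  wait

With --control-msg-num 1 the three initiators contend for the single control-message buffer; the third table below (core 1: 25.00 IOPS at ~40.9 ms average) is consistent with one worker repeatedly waiting its turn, which appears to be exactly the condition this test is meant to provoke.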
00:25:40.241 ========================================================
00:25:40.241 Latency(us)
00:25:40.241 Device Information : IOPS MiB/s Average min max
00:25:40.241 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1603.00 6.26 623.81 294.42 946.81
00:25:40.241 ========================================================
00:25:40.241 Total : 1603.00 6.26 623.81 294.42 946.81
00:25:40.241
00:25:40.241 [2024-10-08 17:41:32.024972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x193d2b0 is same with the state(6) to be set
00:25:40.241 Initializing NVMe Controllers
00:25:40.241 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:25:40.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:25:40.241 Initialization complete. Launching workers.
00:25:40.241 ========================================================
00:25:40.241 Latency(us)
00:25:40.241 Device Information : IOPS MiB/s Average min max
00:25:40.241 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1650.00 6.45 605.96 136.46 806.25
00:25:40.241 ========================================================
00:25:40.241 Total : 1650.00 6.45 605.96 136.46 806.25
00:25:40.241
00:25:40.241 Initializing NVMe Controllers
00:25:40.241 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:25:40.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:25:40.241 Initialization complete. Launching workers.
00:25:40.241 ========================================================
00:25:40.241 Latency(us)
00:25:40.241 Device Information : IOPS MiB/s Average min max
00:25:40.241 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40898.70 40775.09 41040.10
00:25:40.241 ========================================================
00:25:40.241 Total : 25.00 0.10 40898.70 40775.09 41040.10
00:25:40.241
00:25:40.241 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 412290
00:25:40.241 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 412291
00:25:40.241 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:25:40.241 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:25:40.241 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup
00:25:40.241 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:25:40.241 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:40.241 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:25:40.241 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:40.241 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:40.242 rmmod nvme_tcp
00:25:40.242 rmmod nvme_fabrics
00:25:40.242 rmmod nvme_keyring
00:25:40.242 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:40.242 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:25:40.242 17:41:32
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:40.242 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' -n 411968 ']' 00:25:40.242 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 411968 00:25:40.242 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 411968 ']' 00:25:40.242 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 411968 00:25:40.242 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:25:40.242 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:40.242 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 411968 00:25:40.505 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:40.505 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:40.505 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 411968' 00:25:40.505 killing process with pid 411968 00:25:40.505 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 411968 00:25:40.505 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 411968 00:25:40.505 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:40.505 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:40.505 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:40.505 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:40.505 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:25:40.505 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:40.505 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:25:40.505 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:40.505 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:40.505 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.505 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.505 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.053 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:43.053 00:25:43.053 real 0m12.395s 00:25:43.053 user 0m8.345s 00:25:43.053 sys 0m6.457s 00:25:43.053 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:43.053 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@10 -- # set +x 00:25:43.053 ************************************ 00:25:43.053 END TEST nvmf_control_msg_list 00:25:43.053 ************************************ 00:25:43.053 17:41:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:43.053 17:41:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:43.053 17:41:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:43.053 17:41:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:43.053 ************************************ 00:25:43.053 START TEST nvmf_wait_for_buf 00:25:43.053 ************************************ 00:25:43.053 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:43.053 * Looking for test storage... 00:25:43.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:43.053 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:43.053 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:25:43.053 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:43.053 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:43.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.054 --rc genhtml_branch_coverage=1 00:25:43.054 --rc genhtml_function_coverage=1 00:25:43.054 --rc genhtml_legend=1 00:25:43.054 --rc geninfo_all_blocks=1 00:25:43.054 --rc geninfo_unexecuted_blocks=1 00:25:43.054 00:25:43.054 ' 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:43.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.054 --rc genhtml_branch_coverage=1 00:25:43.054 --rc genhtml_function_coverage=1 00:25:43.054 --rc genhtml_legend=1 00:25:43.054 --rc geninfo_all_blocks=1 00:25:43.054 --rc geninfo_unexecuted_blocks=1 00:25:43.054 00:25:43.054 ' 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:43.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.054 --rc genhtml_branch_coverage=1 00:25:43.054 --rc genhtml_function_coverage=1 00:25:43.054 --rc genhtml_legend=1 00:25:43.054 --rc geninfo_all_blocks=1 00:25:43.054 --rc geninfo_unexecuted_blocks=1 00:25:43.054 00:25:43.054 ' 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:43.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.054 --rc genhtml_branch_coverage=1 00:25:43.054 --rc genhtml_function_coverage=1 00:25:43.054 --rc genhtml_legend=1 00:25:43.054 --rc geninfo_all_blocks=1 00:25:43.054 --rc geninfo_unexecuted_blocks=1 00:25:43.054 00:25:43.054 ' 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:43.054 17:41:34 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:43.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:43.054 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.055 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:43.055 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:43.055 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:43.055 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:51.196 
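The `[: : integer expression expected` message logged a few records above comes from test/nvmf/common.sh line 33 evaluating `'[' '' -eq 1 ']'`: bash's `[` builtin requires both operands of `-eq` to be integers, and the guarded variable expanded to an empty string. The run continues (the expression sits in an `if` condition, as the jump from common.sh@33 to @37 shows), but the message is avoidable. A minimal sketch of the failure and a defensive rewrite, using SPDK_TEST_FLAG as a hypothetical stand-in for whichever variable line 33 actually reads:

# Reproduction of the error above; SPDK_TEST_FLAG is a placeholder name.
unset SPDK_TEST_FLAG
[ "$SPDK_TEST_FLAG" -eq 1 ] && echo on      # bash: [: : integer expression expected
# Defensive form: default the expansion to 0 so -eq always sees an integer.
[ "${SPDK_TEST_FLAG:-0}" -eq 1 ] && echo on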
17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:51.196 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:51.196 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:51.196 Found net devices under 0000:31:00.0: cvl_0_0 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:51.196 Found net devices under 0000:31:00.1: cvl_0_1 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:51.196 17:41:42 
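The device discovery traced above matches the two E810 functions (0x8086:0x159b) and then resolves each one's kernel net device by globbing the PCI function's net class directory out of sysfs. A standalone sketch of the same lookup, with the PCI address taken from the log:

# Resolve kernel net device names for a PCI function via sysfs,
# mirroring the pci_net_devs idiom traced above.
pci=0000:31:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the ifnames
echo "Found net devices under $pci: ${pci_net_devs[*]}"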
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:51.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:51.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:25:51.196 00:25:51.196 --- 10.0.0.2 ping statistics --- 00:25:51.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.196 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:25:51.196 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:51.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
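The nvmf_tcp_init phase traced above builds a loopback topology out of the NIC's two ports: one port is moved into a private network namespace to act as the target side, both ends get addresses on 10.0.0.0/24, and the NVMe/TCP listen port is opened with a comment-tagged iptables rule so teardown can find it later. A condensed sketch using the interface names and addresses from the log (the comment text here is illustrative; the harness embeds the full rule in it):

# Build the namespace-based loopback topology traced above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: allow NVMe/TCP'  # tag is illustrative
ping -c 1 10.0.0.2                                   # reachability check, as logged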
00:25:51.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:25:51.196 00:25:51.196 --- 10.0.0.1 ping statistics --- 00:25:51.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.197 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:25:51.197 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:51.197 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:25:51.197 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:51.197 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:51.197 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:51.197 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:51.197 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:51.197 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:51.197 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:51.197 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:51.197 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:51.197 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:51.197 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:51.197 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=416870 00:25:51.197 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 416870 00:25:51.197 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:51.197 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 416870 ']' 00:25:51.197 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.197 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:51.197 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.197 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:51.197 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:51.197 [2024-10-08 17:41:42.637050] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:25:51.197 [2024-10-08 17:41:42.637116] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.197 [2024-10-08 17:41:42.724799] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.197 [2024-10-08 17:41:42.819184] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:51.197 [2024-10-08 17:41:42.819244] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:51.197 [2024-10-08 17:41:42.819252] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:51.197 [2024-10-08 17:41:42.819259] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:51.197 [2024-10-08 17:41:42.819266] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:51.197 [2024-10-08 17:41:42.820114] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.458 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:51.458 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:25:51.458 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:51.458 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:51.718 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.719 17:41:43 
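The bring-up traced above hinges on `--wait-for-rpc`: nvmf_tgt starts with its framework paused, the harness waits for the RPC socket to answer, and only then sends the configuration that makes this test meaningful, shrinking the iobuf small pool to 154 buffers so the TCP transport will be forced to retry buffer allocation under load before `framework_start_init` resumes startup. A simplified sketch of that sequence; the polling loop is a stand-in for the harness's waitforlisten helper, and paths are relative to the SPDK tree:

# Start the target paused inside the target namespace.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
# Wait until the RPC socket accepts requests (stand-in for waitforlisten).
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
# Deliberately undersize the small iobuf pool, then resume initialization.
./scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
./scripts/rpc.py framework_start_init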
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:51.719 Malloc0 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:51.719 [2024-10-08 17:41:43.616064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:51.719 [2024-10-08 17:41:43.652359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.719 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:51.979 [2024-10-08 17:41:43.751083] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:53.364 Initializing NVMe Controllers 00:25:53.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:53.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:53.364 Initialization complete. Launching workers. 00:25:53.364 ======================================================== 00:25:53.364 Latency(us) 00:25:53.364 Device Information : IOPS MiB/s Average min max 00:25:53.364 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32294.68 8005.78 63854.95 00:25:53.364 ======================================================== 00:25:53.364 Total : 129.00 16.12 32294.68 8005.78 63854.95 00:25:53.364 00:25:53.364 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:53.364 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:53.364 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.364 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:53.625 rmmod nvme_tcp 00:25:53.625 rmmod nvme_fabrics 00:25:53.625 rmmod nvme_keyring 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 416870 ']' 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 416870 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 416870 ']' 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 416870 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
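The pass criterion for nvmf_wait_for_buf is visible in the trace above: after the 4-queue 128 KiB randread perf run, `iobuf_get_stats` must show that the nvmf_TCP consumer had to retry small-buffer allocations (here retry_count=2038), proving the undersized pool was actually exhausted and the wait-for-buffer path exercised. A sketch of that check, with rpc_cmd expanded to the rpc.py invocation it wraps and the jq filter taken verbatim from the log:

# Read the small-pool retry counter for the nvmf_TCP iobuf consumer.
retry_count=$(./scripts/rpc.py iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
# Zero retries would mean the buffer-wait path was never exercised.
[[ $retry_count -eq 0 ]] && exit 1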
common/autotest_common.sh@955 -- # uname 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 416870 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:53.625 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:53.626 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 416870' 00:25:53.626 killing process with pid 416870 00:25:53.626 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 416870 00:25:53.626 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 416870 00:25:53.888 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:53.888 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:53.888 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:53.888 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:53.888 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:25:53.888 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:53.888 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:25:53.888 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:53.888 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:53.888 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.888 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:53.888 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.801 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:55.801 00:25:55.801 real 0m13.147s 00:25:55.801 user 0m5.421s 00:25:55.801 sys 0m6.290s 00:25:55.801 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:55.801 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:55.801 ************************************ 00:25:55.801 END TEST nvmf_wait_for_buf 00:25:55.801 ************************************ 00:25:56.062 17:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:25:56.062 17:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:25:56.062 17:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:25:56.062 17:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:25:56.062 17:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:25:56.062 17:41:47 
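The firewall teardown traced above (nvmf/common.sh@789) pairs with the tagged insert from setup: because every rule the harness adds carries an SPDK_NVMF comment, cleanup removes exactly those rules without flushing anything else on a shared CI host:

# Drop only the SPDK-tagged rules: filter the saved ruleset, restore the rest.
iptables-save | grep -v SPDK_NVMF | iptables-restore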
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:04.207 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:04.207 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:04.207 Found net devices under 0000:31:00.0: cvl_0_0 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:04.207 Found net devices under 0000:31:00.1: cvl_0_1 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:04.207 ************************************ 00:26:04.207 START TEST nvmf_perf_adq 00:26:04.207 ************************************ 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:04.207 * Looking for test storage... 00:26:04.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:26:04.207 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:04.208 17:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:04.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.208 --rc genhtml_branch_coverage=1 00:26:04.208 --rc genhtml_function_coverage=1 00:26:04.208 --rc genhtml_legend=1 00:26:04.208 --rc geninfo_all_blocks=1 00:26:04.208 --rc geninfo_unexecuted_blocks=1 00:26:04.208 00:26:04.208 ' 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:04.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.208 --rc genhtml_branch_coverage=1 00:26:04.208 --rc genhtml_function_coverage=1 00:26:04.208 --rc genhtml_legend=1 00:26:04.208 --rc geninfo_all_blocks=1 00:26:04.208 --rc geninfo_unexecuted_blocks=1 00:26:04.208 00:26:04.208 ' 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:04.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.208 --rc genhtml_branch_coverage=1 00:26:04.208 --rc genhtml_function_coverage=1 00:26:04.208 --rc genhtml_legend=1 00:26:04.208 --rc geninfo_all_blocks=1 00:26:04.208 --rc geninfo_unexecuted_blocks=1 00:26:04.208 00:26:04.208 ' 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:04.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.208 --rc genhtml_branch_coverage=1 00:26:04.208 --rc genhtml_function_coverage=1 00:26:04.208 --rc genhtml_legend=1 00:26:04.208 --rc geninfo_all_blocks=1 00:26:04.208 --rc geninfo_unexecuted_blocks=1 00:26:04.208 00:26:04.208 ' 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
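The lcov probe traced above resolves to `cmp_versions 1.15 '<' 2` from scripts/common.sh: each version string is split on dots, dashes, and colons, then compared field by field. A condensed standalone sketch of the same idea (the real cmp_versions additionally normalizes non-numeric fields through its decimal helper):

# Return 0 when version $1 sorts before version $2, field by field.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov predates 2.x"   # the branch taken in the trace above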
00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:04.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:04.208 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:04.208 17:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:10.796 17:42:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:10.796 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:10.796 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:10.796 Found net devices under 0000:31:00.0: cvl_0_0 00:26:10.796 17:42:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:10.796 Found net devices under 0000:31:00.1: cvl_0_1 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:26:10.796 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:26:12.711 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:26:16.012 17:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:21.301 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:21.302 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:21.302 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:21.302 Found net devices under 0000:31:00.0: cvl_0_0 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:21.302 Found net devices under 0000:31:00.1: cvl_0_1 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:21.302 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:21.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:21.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:26:21.302 00:26:21.302 --- 10.0.0.2 ping statistics --- 00:26:21.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.302 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:21.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:21.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:26:21.302 00:26:21.302 --- 10.0.0.1 ping statistics --- 00:26:21.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.302 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=428057 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 428057 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 428057 ']' 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:21.302 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:21.302 [2024-10-08 17:42:13.128125] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:26:21.302 [2024-10-08 17:42:13.128190] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.302 [2024-10-08 17:42:13.219253] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:21.564 [2024-10-08 17:42:13.318308] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:21.564 [2024-10-08 17:42:13.318371] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:21.564 [2024-10-08 17:42:13.318380] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:21.564 [2024-10-08 17:42:13.318387] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:21.564 [2024-10-08 17:42:13.318393] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:21.564 [2024-10-08 17:42:13.320761] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.564 [2024-10-08 17:42:13.320922] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:21.564 [2024-10-08 17:42:13.321055] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:21.564 [2024-10-08 17:42:13.321084] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.136 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:22.136 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:26:22.136 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:22.136 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:22.136 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:22.136 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:22.136 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:26:22.136 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:22.136 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:22.136 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.136 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:22.136 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.136 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:22.136 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:22.136 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.136 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:22.136 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.136 
17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:22.136 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.136 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:22.398 [2024-10-08 17:42:14.166046] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:22.398 Malloc1 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:22.398 [2024-10-08 17:42:14.231916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=428301 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:26:22.398 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:24.313 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:26:24.313 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.313 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:24.313 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.313 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:26:24.313 "tick_rate": 2400000000, 00:26:24.313 "poll_groups": [ 00:26:24.313 { 00:26:24.313 "name": "nvmf_tgt_poll_group_000", 00:26:24.313 "admin_qpairs": 1, 00:26:24.313 "io_qpairs": 1, 00:26:24.313 "current_admin_qpairs": 1, 00:26:24.313 "current_io_qpairs": 1, 00:26:24.313 "pending_bdev_io": 0, 00:26:24.313 "completed_nvme_io": 17233, 00:26:24.313 "transports": [ 00:26:24.313 { 00:26:24.313 "trtype": "TCP" 00:26:24.313 } 00:26:24.313 ] 00:26:24.313 }, 00:26:24.313 { 00:26:24.313 "name": "nvmf_tgt_poll_group_001", 00:26:24.313 "admin_qpairs": 0, 00:26:24.313 "io_qpairs": 1, 00:26:24.313 "current_admin_qpairs": 0, 00:26:24.313 "current_io_qpairs": 1, 00:26:24.313 "pending_bdev_io": 0, 00:26:24.313 "completed_nvme_io": 18075, 00:26:24.313 "transports": [ 00:26:24.313 { 00:26:24.313 "trtype": "TCP" 00:26:24.313 } 00:26:24.313 ] 00:26:24.313 }, 00:26:24.313 { 00:26:24.313 "name": "nvmf_tgt_poll_group_002", 00:26:24.313 "admin_qpairs": 0, 00:26:24.313 "io_qpairs": 1, 00:26:24.313 "current_admin_qpairs": 0, 00:26:24.313 "current_io_qpairs": 1, 00:26:24.313 "pending_bdev_io": 0, 00:26:24.313 "completed_nvme_io": 19658, 00:26:24.313 "transports": [ 00:26:24.313 { 00:26:24.313 "trtype": "TCP" 00:26:24.313 } 00:26:24.313 ] 00:26:24.313 }, 00:26:24.313 { 00:26:24.313 "name": "nvmf_tgt_poll_group_003", 00:26:24.313 "admin_qpairs": 0, 00:26:24.313 "io_qpairs": 1, 00:26:24.313 "current_admin_qpairs": 0, 00:26:24.313 "current_io_qpairs": 1, 00:26:24.313 "pending_bdev_io": 0, 00:26:24.313 "completed_nvme_io": 17253, 00:26:24.313 "transports": [ 00:26:24.313 { 00:26:24.313 "trtype": "TCP" 00:26:24.313 } 00:26:24.313 ] 00:26:24.313 } 00:26:24.313 ] 00:26:24.313 }' 00:26:24.313 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:24.313 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:26:24.574 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:26:24.574 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:26:24.574 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 428301 00:26:32.710 Initializing NVMe Controllers 00:26:32.710 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:32.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:32.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:32.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:32.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:26:32.710 Initialization complete. Launching workers. 00:26:32.710 ======================================================== 00:26:32.710 Latency(us) 00:26:32.710 Device Information : IOPS MiB/s Average min max 00:26:32.710 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12426.40 48.54 5150.92 1309.69 10288.58 00:26:32.710 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13636.30 53.27 4693.19 1135.25 14005.62 00:26:32.710 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13558.40 52.96 4720.79 1300.90 12486.69 00:26:32.710 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12843.00 50.17 4983.88 1313.20 13127.91 00:26:32.710 ======================================================== 00:26:32.710 Total : 52464.09 204.94 4879.90 1135.25 14005.62 00:26:32.710 00:26:32.710 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:26:32.710 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:32.710 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:26:32.710 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:32.710 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:26:32.710 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:32.710 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:32.710 rmmod nvme_tcp 00:26:32.710 rmmod nvme_fabrics 00:26:32.710 rmmod nvme_keyring 00:26:32.710 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:32.710 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:26:32.710 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:26:32.710 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 428057 ']' 00:26:32.710 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 428057 00:26:32.710 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 428057 ']' 00:26:32.710 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 428057 00:26:32.710 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:26:32.710 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:32.710 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 428057 00:26:32.710 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:32.710 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:32.711 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 428057' 00:26:32.711 killing process with pid 428057 00:26:32.711 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 428057 00:26:32.711 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 428057 00:26:32.711 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:32.711 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:32.711 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:32.711 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:26:32.711 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:26:32.711 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:32.711 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:26:32.971 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:32.971 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:32.971 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.971 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.971 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.887 17:42:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:34.887 17:42:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:26:34.887 17:42:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:26:34.887 17:42:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:26:36.800 17:42:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:26:38.710 17:42:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:26:44.003 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:26:44.003 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:44.003 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.003 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:44.003 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:44.003 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:44.004 17:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:44.004 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:44.004 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:44.004 Found net devices under 0000:31:00.0: cvl_0_0 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:44.004 17:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:44.004 Found net devices under 0000:31:00.1: cvl_0_1 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:44.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:44.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:26:44.004 00:26:44.004 --- 10.0.0.2 ping statistics --- 00:26:44.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.004 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:44.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:44.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:26:44.004 00:26:44.004 --- 10.0.0.1 ping statistics --- 00:26:44.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.004 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.004 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:26:44.005 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:44.005 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.005 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:44.005 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:44.005 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.005 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:44.005 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:44.005 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:26:44.005 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:44.005 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:44.267 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:44.267 net.core.busy_poll = 1 00:26:44.267 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:26:44.267 net.core.busy_read = 1 00:26:44.267 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:44.267 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:44.267 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:44.267 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:44.267 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:44.528 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:44.528 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:44.528 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:44.528 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:44.528 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=432967 00:26:44.528 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 432967 00:26:44.528 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:44.528 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 432967 ']' 00:26:44.528 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.528 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:44.528 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.528 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:44.528 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:44.528 [2024-10-08 17:42:36.349274] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:26:44.529 [2024-10-08 17:42:36.349342] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.529 [2024-10-08 17:42:36.440787] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:44.790 [2024-10-08 17:42:36.537360] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:44.790 [2024-10-08 17:42:36.537425] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.790 [2024-10-08 17:42:36.537434] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.790 [2024-10-08 17:42:36.537441] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.790 [2024-10-08 17:42:36.537448] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:44.790 [2024-10-08 17:42:36.539527] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.790 [2024-10-08 17:42:36.539693] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:44.790 [2024-10-08 17:42:36.539853] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.790 [2024-10-08 17:42:36.539854] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:45.372 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:45.372 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:26:45.372 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:45.372 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:45.372 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.372 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:45.372 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:26:45.372 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:45.372 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:45.372 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.372 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.372 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.372 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:45.372 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:45.372 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.372 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.372 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.372 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:45.373 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.373 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.635 17:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.635 [2024-10-08 17:42:37.382525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.635 Malloc1 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.635 [2024-10-08 17:42:37.448189] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=433140 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:26:45.635 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:47.551 17:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:26:47.551 17:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.551 17:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.551 17:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.551 17:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:26:47.551 "tick_rate": 2400000000, 00:26:47.551 "poll_groups": [ 00:26:47.551 { 00:26:47.551 "name": "nvmf_tgt_poll_group_000", 00:26:47.551 "admin_qpairs": 1, 00:26:47.551 "io_qpairs": 3, 00:26:47.551 "current_admin_qpairs": 1, 00:26:47.551 "current_io_qpairs": 3, 00:26:47.551 "pending_bdev_io": 0, 00:26:47.551 "completed_nvme_io": 26751, 00:26:47.551 "transports": [ 00:26:47.551 { 00:26:47.551 "trtype": "TCP" 00:26:47.551 } 00:26:47.551 ] 00:26:47.551 }, 00:26:47.551 { 00:26:47.551 "name": "nvmf_tgt_poll_group_001", 00:26:47.551 "admin_qpairs": 0, 00:26:47.551 "io_qpairs": 1, 00:26:47.551 "current_admin_qpairs": 0, 00:26:47.551 "current_io_qpairs": 1, 00:26:47.551 "pending_bdev_io": 0, 00:26:47.551 "completed_nvme_io": 24302, 00:26:47.551 "transports": [ 00:26:47.551 { 00:26:47.551 "trtype": "TCP" 00:26:47.551 } 00:26:47.551 ] 00:26:47.551 }, 00:26:47.551 { 00:26:47.551 "name": "nvmf_tgt_poll_group_002", 00:26:47.551 "admin_qpairs": 0, 00:26:47.551 "io_qpairs": 0, 00:26:47.551 "current_admin_qpairs": 0, 00:26:47.551 "current_io_qpairs": 0, 00:26:47.551 "pending_bdev_io": 0, 00:26:47.551 "completed_nvme_io": 0, 00:26:47.551 "transports": [ 00:26:47.551 { 00:26:47.551 "trtype": "TCP" 00:26:47.551 } 00:26:47.551 ] 00:26:47.551 }, 00:26:47.551 { 00:26:47.551 "name": "nvmf_tgt_poll_group_003", 00:26:47.551 "admin_qpairs": 0, 00:26:47.551 "io_qpairs": 0, 00:26:47.551 "current_admin_qpairs": 0, 00:26:47.551 "current_io_qpairs": 0, 00:26:47.551 "pending_bdev_io": 0, 00:26:47.551 "completed_nvme_io": 0, 00:26:47.551 "transports": [ 00:26:47.551 { 00:26:47.551 "trtype": "TCP" 00:26:47.551 } 00:26:47.551 ] 00:26:47.551 } 00:26:47.551 ] 00:26:47.551 }' 00:26:47.551 17:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:47.551 17:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:26:47.551 17:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:26:47.551 17:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:26:47.551 17:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 433140 00:26:55.686 Initializing NVMe Controllers 00:26:55.686 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:55.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:55.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:55.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:55.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:55.686 Initialization complete. Launching workers. 
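The block above is the heart of the ADQ test: tc mqprio split cvl_0_0 into two hardware traffic classes, the flower filter steered TCP flows to 10.0.0.2:4420 into TC1 with skip_sw, and the target ran on four cores (-m 0xF) inside the cvl_0_0_ns_spdk namespace while spdk_nvme_perf drove I/O from cores 4-7 (-c 0xF0). The check at perf_adq.sh@107-109 then verifies steering by counting poll groups that never received an I/O qpair; the per-core latency table follows below. A minimal sketch of that check, assuming the harness's rpc_cmd wrapper and this run's names:

stats=$(rpc_cmd nvmf_get_stats)
# Poll groups with no active I/O qpairs: with TCP/4420 pinned to TC1
# (queue set 2@2), connections should concentrate on a subset of the
# four reactors, leaving at least two groups idle.
idle=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' <<< "$stats" | wc -l)
(( idle < 2 )) && echo "ADQ steering did not concentrate connections: only $idle idle groups" >&2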
00:26:55.686 ======================================================== 00:26:55.686 Latency(us) 00:26:55.686 Device Information : IOPS MiB/s Average min max 00:26:55.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 17052.26 66.61 3761.33 1093.71 46174.48 00:26:55.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5952.38 23.25 10751.99 1228.19 59557.86 00:26:55.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8338.23 32.57 7675.13 960.98 60460.75 00:26:55.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5941.78 23.21 10775.96 1287.99 57299.41 00:26:55.686 ======================================================== 00:26:55.686 Total : 37284.66 145.64 6870.51 960.98 60460.75 00:26:55.686 00:26:55.686 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:26:55.686 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:55.686 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:26:55.686 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:55.686 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:26:55.686 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:55.686 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:55.686 rmmod nvme_tcp 00:26:55.686 rmmod nvme_fabrics 00:26:55.947 rmmod nvme_keyring 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 432967 ']' 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 432967 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 432967 ']' 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 432967 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 432967 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 432967' 00:26:55.947 killing process with pid 432967 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 432967 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 432967 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:55.947 17:42:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.947 17:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.246 17:42:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:59.246 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:26:59.246 00:26:59.246 real 0m55.840s 00:26:59.246 user 2m49.616s 00:26:59.246 sys 0m12.951s 00:26:59.246 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:59.246 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.246 ************************************ 00:26:59.246 END TEST nvmf_perf_adq 00:26:59.246 ************************************ 00:26:59.246 17:42:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:59.246 17:42:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:59.246 17:42:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:59.246 17:42:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:59.246 ************************************ 00:26:59.246 START TEST nvmf_shutdown 00:26:59.246 ************************************ 00:26:59.246 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:59.246 * Looking for test storage... 
00:26:59.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:59.246 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:59.246 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:26:59.246 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:59.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.508 --rc genhtml_branch_coverage=1 00:26:59.508 --rc genhtml_function_coverage=1 00:26:59.508 --rc genhtml_legend=1 00:26:59.508 --rc geninfo_all_blocks=1 00:26:59.508 --rc geninfo_unexecuted_blocks=1 00:26:59.508 00:26:59.508 ' 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:59.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.508 --rc genhtml_branch_coverage=1 00:26:59.508 --rc genhtml_function_coverage=1 00:26:59.508 --rc genhtml_legend=1 00:26:59.508 --rc geninfo_all_blocks=1 00:26:59.508 --rc geninfo_unexecuted_blocks=1 00:26:59.508 00:26:59.508 ' 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:59.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.508 --rc genhtml_branch_coverage=1 00:26:59.508 --rc genhtml_function_coverage=1 00:26:59.508 --rc genhtml_legend=1 00:26:59.508 --rc geninfo_all_blocks=1 00:26:59.508 --rc geninfo_unexecuted_blocks=1 00:26:59.508 00:26:59.508 ' 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:59.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.508 --rc genhtml_branch_coverage=1 00:26:59.508 --rc genhtml_function_coverage=1 00:26:59.508 --rc genhtml_legend=1 00:26:59.508 --rc geninfo_all_blocks=1 00:26:59.508 --rc geninfo_unexecuted_blocks=1 00:26:59.508 00:26:59.508 ' 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
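The lcov probe above runs scripts/common.sh's cmp_versions ('lt 1.15 2'), which splits both versions on '.', '-' and ':' and compares them field by field, treating missing fields as zero. A condensed sketch of that logic, reconstructed from the xtrace and not a verbatim copy of scripts/common.sh:

cmp_versions() { # usage: cmp_versions 1.15 '<' 2
    local -a ver1 ver2
    local v op=$2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    # Walk the longer of the two arrays; missing fields count as 0.
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' || $op == '>=' ]]; return; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]  # all fields equal
}
cmp_versions 1.15 '<' 2 && echo 'lcov is pre-2.0; use the 1.x LCOV_OPTS'

Here 1.15 < 2 is decided on the first field, so the 1.x-style --rc lcov_branch_coverage options seen above are selected.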
00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:59.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:59.508 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:59.508 17:42:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:59.509 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:59.509 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:59.509 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:59.509 ************************************ 00:26:59.509 START TEST nvmf_shutdown_tc1 00:26:59.509 ************************************ 00:26:59.509 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:26:59.509 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:26:59.509 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:59.509 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:59.509 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:59.509 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:59.509 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:59.509 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:59.509 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.509 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:59.509 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.509 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:59.509 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:59.509 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:59.509 17:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:07.655 17:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:07.655 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:07.656 17:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:07.656 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:07.656 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:07.656 Found net devices under 0000:31:00.0: cvl_0_0 00:27:07.656 17:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:07.656 Found net devices under 0000:31:00.1: cvl_0_1 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:07.656 17:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:07.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:07.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:27:07.656 00:27:07.656 --- 10.0.0.2 ping statistics --- 00:27:07.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.656 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:07.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:07.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:27:07.656 00:27:07.656 --- 10.0.0.1 ping statistics --- 00:27:07.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.656 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=439729 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 439729 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 439729 ']' 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
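Both pings succeeding closes out nvmftestinit: the two E810 ports are evidently looped back to each other (back-to-back cable or switch), so moving one into its own namespace gives a real NIC-to-NIC path on a single host. Condensed from the commands above, with this run's device names and addresses:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# The comment tags the rule so teardown can drop it with
# iptables-save | grep -v SPDK_NVMF | iptables-restore (the iptr helper).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1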
00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:07.656 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:07.657 [2024-10-08 17:42:59.154623] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:27:07.657 [2024-10-08 17:42:59.154689] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.657 [2024-10-08 17:42:59.221990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:07.657 [2024-10-08 17:42:59.306306] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.657 [2024-10-08 17:42:59.306360] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.657 [2024-10-08 17:42:59.306368] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.657 [2024-10-08 17:42:59.306374] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:07.657 [2024-10-08 17:42:59.306379] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:07.657 [2024-10-08 17:42:59.308510] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:07.657 [2024-10-08 17:42:59.308703] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:07.657 [2024-10-08 17:42:59.308864] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.657 [2024-10-08 17:42:59.308865] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:07.657 [2024-10-08 17:42:59.484877] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:07.657 17:42:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.657 17:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:07.657 Malloc1 
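shutdown.sh@27-29 above built rpcs.txt by appending one stanza per subsystem (i in 1..10), and the bare rpc_cmd at @36 replays the whole file in a single RPC session; the Malloc1 line is the first bdev coming back. The stanza text itself is not visible in the xtrace, so the following is a hypothetical reconstruction, inferred only from the Malloc1..Malloc10 bdevs and the cnode listener notices in this log:

# Hypothetical per-subsystem stanza; the real one lives in
# test/nvmf/target/shutdown.sh and may differ in serials and options.
for i in {1..10}; do
    cat <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done > /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
rpc_cmd < /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt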
00:27:07.657 [2024-10-08 17:42:59.598473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.657 Malloc2 00:27:07.919 Malloc3 00:27:07.919 Malloc4 00:27:07.919 Malloc5 00:27:07.919 Malloc6 00:27:07.919 Malloc7 00:27:07.919 Malloc8 00:27:08.180 Malloc9 00:27:08.180 Malloc10 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=440040 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 440040 /var/tmp/bdevperf.sock 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 440040 ']' 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:08.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
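The config generation that follows builds the initiator side: gen_nvmf_target_json 1..10 emits one bdev_nvme_attach_controller entry per target subsystem, and the process substitution behind --json /dev/fd/63 feeds it straight to bdev_svc. With this run's values substituted ($TEST_TRANSPORT=tcp, $NVMF_FIRST_TARGET_IP=10.0.0.2, $NVMF_PORT=4420, hdgst/ddgst defaulting to false), each heredoc below resolves, for subsystem 1, to roughly:

{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}

gen_nvmf_target_json then wraps these entries in a full JSON config document (the exact wrapper keys live in nvmf/common.sh) before handing it to the app.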
00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:08.180 { 00:27:08.180 "params": { 00:27:08.180 "name": "Nvme$subsystem", 00:27:08.180 "trtype": "$TEST_TRANSPORT", 00:27:08.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.180 "adrfam": "ipv4", 00:27:08.180 "trsvcid": "$NVMF_PORT", 00:27:08.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.180 "hdgst": ${hdgst:-false}, 00:27:08.180 "ddgst": ${ddgst:-false} 00:27:08.180 }, 00:27:08.180 "method": "bdev_nvme_attach_controller" 00:27:08.180 } 00:27:08.180 EOF 00:27:08.180 )") 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:08.180 { 00:27:08.180 "params": { 00:27:08.180 "name": "Nvme$subsystem", 00:27:08.180 "trtype": "$TEST_TRANSPORT", 00:27:08.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.180 "adrfam": "ipv4", 00:27:08.180 "trsvcid": "$NVMF_PORT", 00:27:08.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.180 "hdgst": ${hdgst:-false}, 00:27:08.180 "ddgst": ${ddgst:-false} 00:27:08.180 }, 00:27:08.180 "method": "bdev_nvme_attach_controller" 00:27:08.180 } 00:27:08.180 EOF 00:27:08.180 )") 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:08.180 { 00:27:08.180 "params": { 00:27:08.180 "name": "Nvme$subsystem", 00:27:08.180 "trtype": "$TEST_TRANSPORT", 00:27:08.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.180 "adrfam": "ipv4", 00:27:08.180 "trsvcid": "$NVMF_PORT", 00:27:08.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.180 "hdgst": ${hdgst:-false}, 00:27:08.180 "ddgst": ${ddgst:-false} 00:27:08.180 }, 00:27:08.180 "method": "bdev_nvme_attach_controller" 
00:27:08.180 } 00:27:08.180 EOF 00:27:08.180 )") 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:08.180 { 00:27:08.180 "params": { 00:27:08.180 "name": "Nvme$subsystem", 00:27:08.180 "trtype": "$TEST_TRANSPORT", 00:27:08.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.180 "adrfam": "ipv4", 00:27:08.180 "trsvcid": "$NVMF_PORT", 00:27:08.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.180 "hdgst": ${hdgst:-false}, 00:27:08.180 "ddgst": ${ddgst:-false} 00:27:08.180 }, 00:27:08.180 "method": "bdev_nvme_attach_controller" 00:27:08.180 } 00:27:08.180 EOF 00:27:08.180 )") 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:08.180 { 00:27:08.180 "params": { 00:27:08.180 "name": "Nvme$subsystem", 00:27:08.180 "trtype": "$TEST_TRANSPORT", 00:27:08.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.180 "adrfam": "ipv4", 00:27:08.180 "trsvcid": "$NVMF_PORT", 00:27:08.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.180 "hdgst": ${hdgst:-false}, 00:27:08.180 "ddgst": ${ddgst:-false} 00:27:08.180 }, 00:27:08.180 "method": "bdev_nvme_attach_controller" 00:27:08.180 } 00:27:08.180 EOF 00:27:08.180 )") 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:08.180 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:08.181 { 00:27:08.181 "params": { 00:27:08.181 "name": "Nvme$subsystem", 00:27:08.181 "trtype": "$TEST_TRANSPORT", 00:27:08.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.181 "adrfam": "ipv4", 00:27:08.181 "trsvcid": "$NVMF_PORT", 00:27:08.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.181 "hdgst": ${hdgst:-false}, 00:27:08.181 "ddgst": ${ddgst:-false} 00:27:08.181 }, 00:27:08.181 "method": "bdev_nvme_attach_controller" 00:27:08.181 } 00:27:08.181 EOF 00:27:08.181 )") 00:27:08.181 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:08.181 [2024-10-08 17:43:00.114406] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:27:08.181 [2024-10-08 17:43:00.114480] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:08.181 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:08.181 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:08.181 { 00:27:08.181 "params": { 00:27:08.181 "name": "Nvme$subsystem", 00:27:08.181 "trtype": "$TEST_TRANSPORT", 00:27:08.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.181 "adrfam": "ipv4", 00:27:08.181 "trsvcid": "$NVMF_PORT", 00:27:08.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.181 "hdgst": ${hdgst:-false}, 00:27:08.181 "ddgst": ${ddgst:-false} 00:27:08.181 }, 00:27:08.181 "method": "bdev_nvme_attach_controller" 00:27:08.181 } 00:27:08.181 EOF 00:27:08.181 )") 00:27:08.181 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:08.181 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:08.181 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:08.181 { 00:27:08.181 "params": { 00:27:08.181 "name": "Nvme$subsystem", 00:27:08.181 "trtype": "$TEST_TRANSPORT", 00:27:08.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.181 "adrfam": "ipv4", 00:27:08.181 "trsvcid": "$NVMF_PORT", 00:27:08.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.181 "hdgst": ${hdgst:-false}, 00:27:08.181 "ddgst": ${ddgst:-false} 00:27:08.181 }, 00:27:08.181 "method": "bdev_nvme_attach_controller" 00:27:08.181 } 00:27:08.181 EOF 00:27:08.181 )") 00:27:08.181 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:08.181 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:08.181 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:08.181 { 00:27:08.181 "params": { 00:27:08.181 "name": "Nvme$subsystem", 00:27:08.181 "trtype": "$TEST_TRANSPORT", 00:27:08.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.181 "adrfam": "ipv4", 00:27:08.181 "trsvcid": "$NVMF_PORT", 00:27:08.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.181 "hdgst": ${hdgst:-false}, 00:27:08.181 "ddgst": ${ddgst:-false} 00:27:08.181 }, 00:27:08.181 "method": "bdev_nvme_attach_controller" 00:27:08.181 } 00:27:08.181 EOF 00:27:08.181 )") 00:27:08.181 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:08.181 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:08.181 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:08.181 { 00:27:08.181 "params": { 00:27:08.181 "name": "Nvme$subsystem", 00:27:08.181 "trtype": "$TEST_TRANSPORT", 00:27:08.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.181 "adrfam": "ipv4", 
00:27:08.181 "trsvcid": "$NVMF_PORT", 00:27:08.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.181 "hdgst": ${hdgst:-false}, 00:27:08.181 "ddgst": ${ddgst:-false} 00:27:08.181 }, 00:27:08.181 "method": "bdev_nvme_attach_controller" 00:27:08.181 } 00:27:08.181 EOF 00:27:08.181 )") 00:27:08.181 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:08.181 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:27:08.181 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:27:08.181 17:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:27:08.181 "params": { 00:27:08.181 "name": "Nvme1", 00:27:08.181 "trtype": "tcp", 00:27:08.181 "traddr": "10.0.0.2", 00:27:08.181 "adrfam": "ipv4", 00:27:08.181 "trsvcid": "4420", 00:27:08.181 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:08.181 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:08.181 "hdgst": false, 00:27:08.181 "ddgst": false 00:27:08.181 }, 00:27:08.181 "method": "bdev_nvme_attach_controller" 00:27:08.181 },{ 00:27:08.181 "params": { 00:27:08.181 "name": "Nvme2", 00:27:08.181 "trtype": "tcp", 00:27:08.181 "traddr": "10.0.0.2", 00:27:08.181 "adrfam": "ipv4", 00:27:08.181 "trsvcid": "4420", 00:27:08.181 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:08.181 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:08.181 "hdgst": false, 00:27:08.181 "ddgst": false 00:27:08.181 }, 00:27:08.181 "method": "bdev_nvme_attach_controller" 00:27:08.181 },{ 00:27:08.181 "params": { 00:27:08.181 "name": "Nvme3", 00:27:08.181 "trtype": "tcp", 00:27:08.181 "traddr": "10.0.0.2", 00:27:08.181 "adrfam": "ipv4", 00:27:08.181 "trsvcid": "4420", 00:27:08.181 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:08.181 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:08.181 "hdgst": false, 00:27:08.181 "ddgst": false 00:27:08.181 }, 00:27:08.181 "method": "bdev_nvme_attach_controller" 00:27:08.181 },{ 00:27:08.181 "params": { 00:27:08.181 "name": "Nvme4", 00:27:08.181 "trtype": "tcp", 00:27:08.181 "traddr": "10.0.0.2", 00:27:08.181 "adrfam": "ipv4", 00:27:08.181 "trsvcid": "4420", 00:27:08.181 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:08.181 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:08.181 "hdgst": false, 00:27:08.181 "ddgst": false 00:27:08.181 }, 00:27:08.181 "method": "bdev_nvme_attach_controller" 00:27:08.181 },{ 00:27:08.181 "params": { 00:27:08.181 "name": "Nvme5", 00:27:08.181 "trtype": "tcp", 00:27:08.181 "traddr": "10.0.0.2", 00:27:08.181 "adrfam": "ipv4", 00:27:08.181 "trsvcid": "4420", 00:27:08.181 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:08.181 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:08.181 "hdgst": false, 00:27:08.181 "ddgst": false 00:27:08.181 }, 00:27:08.181 "method": "bdev_nvme_attach_controller" 00:27:08.181 },{ 00:27:08.181 "params": { 00:27:08.181 "name": "Nvme6", 00:27:08.181 "trtype": "tcp", 00:27:08.181 "traddr": "10.0.0.2", 00:27:08.181 "adrfam": "ipv4", 00:27:08.181 "trsvcid": "4420", 00:27:08.181 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:08.181 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:08.181 "hdgst": false, 00:27:08.181 "ddgst": false 00:27:08.181 }, 00:27:08.181 "method": "bdev_nvme_attach_controller" 00:27:08.181 },{ 00:27:08.181 "params": { 00:27:08.181 "name": "Nvme7", 00:27:08.181 "trtype": "tcp", 00:27:08.181 "traddr": "10.0.0.2", 00:27:08.181 
"adrfam": "ipv4", 00:27:08.181 "trsvcid": "4420", 00:27:08.181 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:08.181 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:08.181 "hdgst": false, 00:27:08.181 "ddgst": false 00:27:08.181 }, 00:27:08.181 "method": "bdev_nvme_attach_controller" 00:27:08.181 },{ 00:27:08.181 "params": { 00:27:08.181 "name": "Nvme8", 00:27:08.181 "trtype": "tcp", 00:27:08.181 "traddr": "10.0.0.2", 00:27:08.181 "adrfam": "ipv4", 00:27:08.181 "trsvcid": "4420", 00:27:08.181 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:08.181 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:08.181 "hdgst": false, 00:27:08.181 "ddgst": false 00:27:08.181 }, 00:27:08.181 "method": "bdev_nvme_attach_controller" 00:27:08.181 },{ 00:27:08.181 "params": { 00:27:08.181 "name": "Nvme9", 00:27:08.181 "trtype": "tcp", 00:27:08.181 "traddr": "10.0.0.2", 00:27:08.181 "adrfam": "ipv4", 00:27:08.181 "trsvcid": "4420", 00:27:08.181 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:08.181 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:08.181 "hdgst": false, 00:27:08.181 "ddgst": false 00:27:08.181 }, 00:27:08.181 "method": "bdev_nvme_attach_controller" 00:27:08.181 },{ 00:27:08.181 "params": { 00:27:08.181 "name": "Nvme10", 00:27:08.181 "trtype": "tcp", 00:27:08.181 "traddr": "10.0.0.2", 00:27:08.181 "adrfam": "ipv4", 00:27:08.181 "trsvcid": "4420", 00:27:08.182 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:08.182 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:08.182 "hdgst": false, 00:27:08.182 "ddgst": false 00:27:08.182 }, 00:27:08.182 "method": "bdev_nvme_attach_controller" 00:27:08.182 }' 00:27:08.442 [2024-10-08 17:43:00.203214] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.442 [2024-10-08 17:43:00.298968] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.357 17:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:10.357 17:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:27:10.358 17:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:10.358 17:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.358 17:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:10.358 17:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.358 17:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 440040 00:27:10.358 17:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:27:10.358 17:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:27:10.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 440040 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:10.931 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 439729 00:27:10.931 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:10.931 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:10.931 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:27:10.931 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:27:10.931 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:10.931 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:10.931 { 00:27:10.931 "params": { 00:27:10.931 "name": "Nvme$subsystem", 00:27:10.931 "trtype": "$TEST_TRANSPORT", 00:27:10.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:10.931 "adrfam": "ipv4", 00:27:10.931 "trsvcid": "$NVMF_PORT", 00:27:10.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:10.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:10.931 "hdgst": ${hdgst:-false}, 00:27:10.931 "ddgst": ${ddgst:-false} 00:27:10.931 }, 00:27:10.931 "method": "bdev_nvme_attach_controller" 00:27:10.931 } 00:27:10.931 EOF 00:27:10.931 )") 00:27:10.931 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:10.931 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:10.931 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:10.931 { 00:27:10.931 "params": { 00:27:10.931 "name": "Nvme$subsystem", 00:27:10.931 "trtype": "$TEST_TRANSPORT", 00:27:10.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:10.931 "adrfam": "ipv4", 00:27:10.931 "trsvcid": "$NVMF_PORT", 00:27:10.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:10.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:10.931 "hdgst": ${hdgst:-false}, 00:27:10.931 "ddgst": ${ddgst:-false} 00:27:10.931 }, 00:27:10.931 "method": "bdev_nvme_attach_controller" 00:27:10.931 } 00:27:10.931 EOF 00:27:10.931 )") 00:27:10.931 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:10.931 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:10.931 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:10.931 { 00:27:10.931 "params": { 00:27:10.931 "name": "Nvme$subsystem", 00:27:10.931 "trtype": "$TEST_TRANSPORT", 00:27:10.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:10.931 "adrfam": "ipv4", 00:27:10.931 "trsvcid": "$NVMF_PORT", 00:27:10.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:10.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:10.931 "hdgst": ${hdgst:-false}, 00:27:10.931 "ddgst": ${ddgst:-false} 00:27:10.931 }, 00:27:10.931 "method": "bdev_nvme_attach_controller" 00:27:10.931 } 00:27:10.931 EOF 00:27:10.931 )") 00:27:10.931 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:10.931 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:10.931 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:10.931 { 00:27:10.931 "params": { 00:27:10.931 "name": "Nvme$subsystem", 00:27:10.931 "trtype": "$TEST_TRANSPORT", 00:27:10.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:10.931 "adrfam": "ipv4", 00:27:10.931 "trsvcid": "$NVMF_PORT", 00:27:10.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:10.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:10.931 "hdgst": ${hdgst:-false}, 00:27:10.931 "ddgst": ${ddgst:-false} 00:27:10.931 }, 00:27:10.931 "method": "bdev_nvme_attach_controller" 00:27:10.931 } 00:27:10.931 EOF 00:27:10.931 )") 00:27:10.931 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:10.931 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:10.931 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:10.931 { 00:27:10.931 "params": { 00:27:10.931 "name": "Nvme$subsystem", 00:27:10.931 "trtype": "$TEST_TRANSPORT", 00:27:10.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:10.931 "adrfam": "ipv4", 00:27:10.931 "trsvcid": "$NVMF_PORT", 00:27:10.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:10.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:10.931 "hdgst": ${hdgst:-false}, 00:27:10.931 "ddgst": ${ddgst:-false} 00:27:10.931 }, 00:27:10.931 "method": "bdev_nvme_attach_controller" 00:27:10.931 } 00:27:10.931 EOF 00:27:10.931 )") 00:27:10.931 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:11.192 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:11.193 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:11.193 { 00:27:11.193 "params": { 00:27:11.193 "name": "Nvme$subsystem", 00:27:11.193 "trtype": "$TEST_TRANSPORT", 00:27:11.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.193 "adrfam": "ipv4", 00:27:11.193 "trsvcid": "$NVMF_PORT", 00:27:11.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.193 "hdgst": ${hdgst:-false}, 00:27:11.193 "ddgst": ${ddgst:-false} 00:27:11.193 }, 00:27:11.193 "method": "bdev_nvme_attach_controller" 00:27:11.193 } 00:27:11.193 EOF 00:27:11.193 )") 00:27:11.193 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:11.193 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:11.193 [2024-10-08 17:43:02.931971] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:27:11.193 [2024-10-08 17:43:02.932030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440562 ] 00:27:11.193 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:11.193 { 00:27:11.193 "params": { 00:27:11.193 "name": "Nvme$subsystem", 00:27:11.193 "trtype": "$TEST_TRANSPORT", 00:27:11.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.193 "adrfam": "ipv4", 00:27:11.193 "trsvcid": "$NVMF_PORT", 00:27:11.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.193 "hdgst": ${hdgst:-false}, 00:27:11.193 "ddgst": ${ddgst:-false} 00:27:11.193 }, 00:27:11.193 "method": "bdev_nvme_attach_controller" 00:27:11.193 } 00:27:11.193 EOF 00:27:11.193 )") 00:27:11.193 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:11.193 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:11.193 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:11.193 { 00:27:11.193 "params": { 00:27:11.193 "name": "Nvme$subsystem", 00:27:11.193 "trtype": "$TEST_TRANSPORT", 00:27:11.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.193 "adrfam": "ipv4", 00:27:11.193 "trsvcid": "$NVMF_PORT", 00:27:11.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.193 "hdgst": ${hdgst:-false}, 00:27:11.193 "ddgst": ${ddgst:-false} 00:27:11.193 }, 00:27:11.193 "method": "bdev_nvme_attach_controller" 00:27:11.193 } 00:27:11.193 EOF 00:27:11.193 )") 00:27:11.193 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:11.193 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:11.193 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:11.193 { 00:27:11.193 "params": { 00:27:11.193 "name": "Nvme$subsystem", 00:27:11.193 "trtype": "$TEST_TRANSPORT", 00:27:11.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.193 "adrfam": "ipv4", 00:27:11.193 "trsvcid": "$NVMF_PORT", 00:27:11.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.193 "hdgst": ${hdgst:-false}, 00:27:11.193 "ddgst": ${ddgst:-false} 00:27:11.193 }, 00:27:11.193 "method": "bdev_nvme_attach_controller" 00:27:11.193 } 00:27:11.193 EOF 00:27:11.193 )") 00:27:11.193 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:11.193 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:11.193 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:11.193 { 00:27:11.193 "params": { 00:27:11.193 "name": "Nvme$subsystem", 00:27:11.193 "trtype": "$TEST_TRANSPORT", 00:27:11.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.193 "adrfam": "ipv4", 00:27:11.193 "trsvcid": "$NVMF_PORT", 00:27:11.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.193 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.193 "hdgst": ${hdgst:-false}, 00:27:11.193 "ddgst": ${ddgst:-false} 00:27:11.193 }, 00:27:11.193 "method": "bdev_nvme_attach_controller" 00:27:11.193 } 00:27:11.193 EOF 00:27:11.193 )") 00:27:11.193 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:27:11.193 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:27:11.193 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:27:11.193 17:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:27:11.193 "params": { 00:27:11.193 "name": "Nvme1", 00:27:11.193 "trtype": "tcp", 00:27:11.193 "traddr": "10.0.0.2", 00:27:11.193 "adrfam": "ipv4", 00:27:11.193 "trsvcid": "4420", 00:27:11.193 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:11.193 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:11.193 "hdgst": false, 00:27:11.193 "ddgst": false 00:27:11.193 }, 00:27:11.193 "method": "bdev_nvme_attach_controller" 00:27:11.193 },{ 00:27:11.193 "params": { 00:27:11.193 "name": "Nvme2", 00:27:11.193 "trtype": "tcp", 00:27:11.193 "traddr": "10.0.0.2", 00:27:11.193 "adrfam": "ipv4", 00:27:11.193 "trsvcid": "4420", 00:27:11.193 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:11.193 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:11.193 "hdgst": false, 00:27:11.193 "ddgst": false 00:27:11.193 }, 00:27:11.193 "method": "bdev_nvme_attach_controller" 00:27:11.193 },{ 00:27:11.193 "params": { 00:27:11.193 "name": "Nvme3", 00:27:11.193 "trtype": "tcp", 00:27:11.193 "traddr": "10.0.0.2", 00:27:11.193 "adrfam": "ipv4", 00:27:11.193 "trsvcid": "4420", 00:27:11.193 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:11.193 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:11.193 "hdgst": false, 00:27:11.193 "ddgst": false 00:27:11.193 }, 00:27:11.193 "method": "bdev_nvme_attach_controller" 00:27:11.193 },{ 00:27:11.193 "params": { 00:27:11.193 "name": "Nvme4", 00:27:11.193 "trtype": "tcp", 00:27:11.193 "traddr": "10.0.0.2", 00:27:11.193 "adrfam": "ipv4", 00:27:11.193 "trsvcid": "4420", 00:27:11.193 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:11.193 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:11.193 "hdgst": false, 00:27:11.193 "ddgst": false 00:27:11.193 }, 00:27:11.193 "method": "bdev_nvme_attach_controller" 00:27:11.193 },{ 00:27:11.193 "params": { 00:27:11.193 "name": "Nvme5", 00:27:11.193 "trtype": "tcp", 00:27:11.193 "traddr": "10.0.0.2", 00:27:11.193 "adrfam": "ipv4", 00:27:11.193 "trsvcid": "4420", 00:27:11.193 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:11.193 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:11.193 "hdgst": false, 00:27:11.193 "ddgst": false 00:27:11.193 }, 00:27:11.193 "method": "bdev_nvme_attach_controller" 00:27:11.193 },{ 00:27:11.193 "params": { 00:27:11.193 "name": "Nvme6", 00:27:11.193 "trtype": "tcp", 00:27:11.193 "traddr": "10.0.0.2", 00:27:11.193 "adrfam": "ipv4", 00:27:11.193 "trsvcid": "4420", 00:27:11.193 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:11.193 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:11.193 "hdgst": false, 00:27:11.193 "ddgst": false 00:27:11.193 }, 00:27:11.193 "method": "bdev_nvme_attach_controller" 00:27:11.193 },{ 00:27:11.193 "params": { 00:27:11.193 "name": "Nvme7", 00:27:11.193 "trtype": "tcp", 00:27:11.193 "traddr": "10.0.0.2", 00:27:11.193 "adrfam": "ipv4", 00:27:11.193 "trsvcid": "4420", 00:27:11.193 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:11.193 
"hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:11.193 "hdgst": false, 00:27:11.193 "ddgst": false 00:27:11.193 }, 00:27:11.193 "method": "bdev_nvme_attach_controller" 00:27:11.193 },{ 00:27:11.193 "params": { 00:27:11.193 "name": "Nvme8", 00:27:11.193 "trtype": "tcp", 00:27:11.193 "traddr": "10.0.0.2", 00:27:11.193 "adrfam": "ipv4", 00:27:11.193 "trsvcid": "4420", 00:27:11.193 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:11.193 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:11.193 "hdgst": false, 00:27:11.193 "ddgst": false 00:27:11.193 }, 00:27:11.193 "method": "bdev_nvme_attach_controller" 00:27:11.193 },{ 00:27:11.193 "params": { 00:27:11.193 "name": "Nvme9", 00:27:11.193 "trtype": "tcp", 00:27:11.193 "traddr": "10.0.0.2", 00:27:11.193 "adrfam": "ipv4", 00:27:11.193 "trsvcid": "4420", 00:27:11.193 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:11.193 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:11.193 "hdgst": false, 00:27:11.193 "ddgst": false 00:27:11.193 }, 00:27:11.193 "method": "bdev_nvme_attach_controller" 00:27:11.193 },{ 00:27:11.193 "params": { 00:27:11.193 "name": "Nvme10", 00:27:11.193 "trtype": "tcp", 00:27:11.193 "traddr": "10.0.0.2", 00:27:11.193 "adrfam": "ipv4", 00:27:11.193 "trsvcid": "4420", 00:27:11.193 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:11.193 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:11.193 "hdgst": false, 00:27:11.193 "ddgst": false 00:27:11.193 }, 00:27:11.193 "method": "bdev_nvme_attach_controller" 00:27:11.193 }' 00:27:11.193 [2024-10-08 17:43:03.013725] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.194 [2024-10-08 17:43:03.078291] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.577 Running I/O for 1 seconds... 00:27:13.780 1863.00 IOPS, 116.44 MiB/s 00:27:13.780 Latency(us) 00:27:13.780 [2024-10-08T15:43:05.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:13.780 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.780 Verification LBA range: start 0x0 length 0x400 00:27:13.780 Nvme1n1 : 1.12 231.31 14.46 0.00 0.00 273830.41 19551.57 248162.99 00:27:13.780 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.780 Verification LBA range: start 0x0 length 0x400 00:27:13.780 Nvme2n1 : 1.12 229.46 14.34 0.00 0.00 271329.71 29491.20 249910.61 00:27:13.780 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.780 Verification LBA range: start 0x0 length 0x400 00:27:13.780 Nvme3n1 : 1.11 232.66 14.54 0.00 0.00 262077.53 3877.55 258648.75 00:27:13.780 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.780 Verification LBA range: start 0x0 length 0x400 00:27:13.780 Nvme4n1 : 1.12 228.92 14.31 0.00 0.00 262266.13 13325.65 248162.99 00:27:13.780 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.780 Verification LBA range: start 0x0 length 0x400 00:27:13.780 Nvme5n1 : 1.17 219.00 13.69 0.00 0.00 270492.16 18896.21 277872.64 00:27:13.780 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.780 Verification LBA range: start 0x0 length 0x400 00:27:13.780 Nvme6n1 : 1.13 225.89 14.12 0.00 0.00 256676.91 19660.80 253405.87 00:27:13.780 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.780 Verification LBA range: start 0x0 length 0x400 00:27:13.780 Nvme7n1 : 1.16 275.09 17.19 0.00 0.00 207540.57 32331.09 246415.36 00:27:13.780 Job: Nvme8n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:27:13.780 Verification LBA range: start 0x0 length 0x400 00:27:13.780 Nvme8n1 : 1.17 274.51 17.16 0.00 0.00 204148.99 12834.13 265639.25 00:27:13.780 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.780 Verification LBA range: start 0x0 length 0x400 00:27:13.780 Nvme9n1 : 1.16 220.85 13.80 0.00 0.00 248928.21 17476.27 251658.24 00:27:13.780 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.780 Verification LBA range: start 0x0 length 0x400 00:27:13.780 Nvme10n1 : 1.17 272.70 17.04 0.00 0.00 198188.63 7591.25 253405.87 00:27:13.780 [2024-10-08T15:43:05.772Z] =================================================================================================================== 00:27:13.780 [2024-10-08T15:43:05.772Z] Total : 2410.38 150.65 0.00 0.00 242666.34 3877.55 277872.64 00:27:13.780 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:27:13.780 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:13.780 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:13.780 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:13.780 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:13.780 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:13.780 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:27:13.780 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:13.780 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:27:13.780 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:13.780 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:13.780 rmmod nvme_tcp 00:27:13.780 rmmod nvme_fabrics 00:27:13.780 rmmod nvme_keyring 00:27:14.040 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:14.040 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:27:14.040 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:27:14.040 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 439729 ']' 00:27:14.040 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 439729 00:27:14.040 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 439729 ']' 00:27:14.040 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 439729 00:27:14.040 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:27:14.040 17:43:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:14.040 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 439729 00:27:14.040 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:14.040 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:14.040 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 439729' 00:27:14.040 killing process with pid 439729 00:27:14.040 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 439729 00:27:14.040 17:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 439729 00:27:14.301 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:14.301 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:14.301 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:14.301 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:27:14.301 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:14.301 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:27:14.301 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:27:14.301 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:14.301 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:14.301 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.301 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:14.301 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.233 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:16.233 00:27:16.233 real 0m16.824s 00:27:16.233 user 0m33.400s 00:27:16.233 sys 0m7.001s 00:27:16.233 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:16.233 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:16.233 ************************************ 00:27:16.233 END TEST nvmf_shutdown_tc1 00:27:16.233 ************************************ 00:27:16.233 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:16.233 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:16.233 17:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:16.233 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:16.497 ************************************ 00:27:16.497 START TEST nvmf_shutdown_tc2 00:27:16.497 ************************************ 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:16.497 17:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:27:16.497 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 
- 0x159b)' 00:27:16.498 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:16.498 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:16.498 Found net devices under 0000:31:00.0: cvl_0_0 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:16.498 
17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:16.498 Found net devices under 0000:31:00.1: cvl_0_1 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:16.498 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:16.499 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.499 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:16.499 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- 
# ip -4 addr flush cvl_0_1 00:27:16.499 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:16.499 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:16.499 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:16.499 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:16.499 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:16.499 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:16.759 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:16.759 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:16.759 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:16.759 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:16.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:16.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:27:16.759 00:27:16.759 --- 10.0.0.2 ping statistics --- 00:27:16.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.759 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:27:16.759 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:16.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:16.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:27:16.759 00:27:16.759 --- 10.0.0.1 ping statistics --- 00:27:16.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.759 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:27:16.759 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.759 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:27:16.759 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:16.759 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.759 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:16.760 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:16.760 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.760 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:16.760 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:16.760 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:16.760 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:16.760 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:16.760 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:16.760 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=441848 00:27:16.760 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 441848 00:27:16.760 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:16.760 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 441848 ']' 00:27:16.760 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.760 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:16.760 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
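Condensed from the nvmf_tcp_init trace above, the network plumbing for tc2 reduces to the recipe below. Every command appears verbatim in the trace; only the xtrace prefixes are stripped, and the -m comment tag that the trace appends to the iptables rule is omitted here for brevity.

ip -4 addr flush cvl_0_0                                            # clear both E810 ports (@267/@268)
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                        # fresh namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check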
00:27:16.760 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:16.760 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:16.760 [2024-10-08 17:43:08.698149] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:27:16.760 [2024-10-08 17:43:08.698210] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:17.020 [2024-10-08 17:43:08.785301] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:17.020 [2024-10-08 17:43:08.845399] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:17.020 [2024-10-08 17:43:08.845444] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:17.020 [2024-10-08 17:43:08.845449] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:17.020 [2024-10-08 17:43:08.845454] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:17.020 [2024-10-08 17:43:08.845458] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:17.020 [2024-10-08 17:43:08.847035] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:17.020 [2024-10-08 17:43:08.847353] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:17.021 [2024-10-08 17:43:08.847470] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.021 [2024-10-08 17:43:08.847471] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:27:17.591 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:17.591 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:27:17.591 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:17.591 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:17.591 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.591 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.591 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:17.591 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.591 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.591 [2024-10-08 17:43:09.554116] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.591 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.591 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:17.591 17:43:09 
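The core masks are worth decoding at this point: the target's -m 0x1E is binary 11110, i.e. reactors on cores 1 through 4, which matches both the 'Total cores available: 4' notice and the four 'Reactor started on core N' lines above. The bdevperf initiator started later in this test runs with -c 0x1 (binary 1, core 0 only), so the target and the I/O generator never contend for the same reactor core.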
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:17.591 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:17.591 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.591 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:17.591 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:17.591 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:17.591 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:17.591 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:17.591 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:17.591 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:17.852 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:17.852 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:17.852 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:17.852 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:17.852 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:17.852 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:17.852 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:17.852 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:17.852 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:17.852 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:17.852 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:17.852 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:17.852 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:17.852 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:17.852 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:17.852 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.852 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.852 Malloc1 
00:27:17.852 [2024-10-08 17:43:09.652799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:17.852 Malloc2 00:27:17.852 Malloc3 00:27:17.852 Malloc4 00:27:17.852 Malloc5 00:27:17.852 Malloc6 00:27:18.112 Malloc7 00:27:18.112 Malloc8 00:27:18.112 Malloc9 00:27:18.112 Malloc10 00:27:18.112 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.112 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:18.112 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:18.112 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.112 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=442112 00:27:18.112 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 442112 /var/tmp/bdevperf.sock 00:27:18.112 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 442112 ']' 00:27:18.112 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:18.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
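The cat/rpc_cmd pair traced above batches the per-subsystem setup into rpcs.txt and replays the whole file over the RPC socket in one shot, which is what produces the Malloc1 through Malloc10 bdevs and the single listener notice. A sketch of what each loop iteration appends (the 64 MiB size, 512 B block size, and exact option spelling are assumptions; the NQNs match the attach parameters visible below):

    # One Malloc bdev, one subsystem, one namespace, and one TCP listener per
    # id; the real script assembles the same lines with a cat heredoc.
    for i in {1..10}; do
        {
            echo "bdev_malloc_create 64 512 -b Malloc$i"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> rpcs.txt
    done
    rpc_cmd < rpcs.txt    # rpc_cmd wraps scripts/rpc.py, here against /var/tmp/spdk.sock

Batching matters for test runtime: one rpc.py invocation amortizes interpreter startup across all forty-odd calls instead of paying it once per call.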
00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:18.113 { 00:27:18.113 "params": { 00:27:18.113 "name": "Nvme$subsystem", 00:27:18.113 "trtype": "$TEST_TRANSPORT", 00:27:18.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.113 "adrfam": "ipv4", 00:27:18.113 "trsvcid": "$NVMF_PORT", 00:27:18.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.113 "hdgst": ${hdgst:-false}, 00:27:18.113 "ddgst": ${ddgst:-false} 00:27:18.113 }, 00:27:18.113 "method": "bdev_nvme_attach_controller" 00:27:18.113 } 00:27:18.113 EOF 00:27:18.113 )") 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:18.113 { 00:27:18.113 "params": { 00:27:18.113 "name": "Nvme$subsystem", 00:27:18.113 "trtype": "$TEST_TRANSPORT", 00:27:18.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.113 "adrfam": "ipv4", 00:27:18.113 "trsvcid": "$NVMF_PORT", 00:27:18.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.113 "hdgst": ${hdgst:-false}, 00:27:18.113 "ddgst": ${ddgst:-false} 00:27:18.113 }, 00:27:18.113 "method": "bdev_nvme_attach_controller" 00:27:18.113 } 00:27:18.113 EOF 00:27:18.113 )") 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:18.113 { 00:27:18.113 "params": { 00:27:18.113 "name": "Nvme$subsystem", 00:27:18.113 "trtype": "$TEST_TRANSPORT", 00:27:18.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.113 "adrfam": "ipv4", 00:27:18.113 "trsvcid": "$NVMF_PORT", 00:27:18.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.113 "hdgst": ${hdgst:-false}, 00:27:18.113 "ddgst": ${ddgst:-false} 00:27:18.113 }, 00:27:18.113 "method": 
"bdev_nvme_attach_controller" 00:27:18.113 } 00:27:18.113 EOF 00:27:18.113 )") 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:18.113 { 00:27:18.113 "params": { 00:27:18.113 "name": "Nvme$subsystem", 00:27:18.113 "trtype": "$TEST_TRANSPORT", 00:27:18.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.113 "adrfam": "ipv4", 00:27:18.113 "trsvcid": "$NVMF_PORT", 00:27:18.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.113 "hdgst": ${hdgst:-false}, 00:27:18.113 "ddgst": ${ddgst:-false} 00:27:18.113 }, 00:27:18.113 "method": "bdev_nvme_attach_controller" 00:27:18.113 } 00:27:18.113 EOF 00:27:18.113 )") 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:18.113 { 00:27:18.113 "params": { 00:27:18.113 "name": "Nvme$subsystem", 00:27:18.113 "trtype": "$TEST_TRANSPORT", 00:27:18.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.113 "adrfam": "ipv4", 00:27:18.113 "trsvcid": "$NVMF_PORT", 00:27:18.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.113 "hdgst": ${hdgst:-false}, 00:27:18.113 "ddgst": ${ddgst:-false} 00:27:18.113 }, 00:27:18.113 "method": "bdev_nvme_attach_controller" 00:27:18.113 } 00:27:18.113 EOF 00:27:18.113 )") 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:18.113 { 00:27:18.113 "params": { 00:27:18.113 "name": "Nvme$subsystem", 00:27:18.113 "trtype": "$TEST_TRANSPORT", 00:27:18.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.113 "adrfam": "ipv4", 00:27:18.113 "trsvcid": "$NVMF_PORT", 00:27:18.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.113 "hdgst": ${hdgst:-false}, 00:27:18.113 "ddgst": ${ddgst:-false} 00:27:18.113 }, 00:27:18.113 "method": "bdev_nvme_attach_controller" 00:27:18.113 } 00:27:18.113 EOF 00:27:18.113 )") 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:18.113 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:18.113 { 00:27:18.113 "params": { 00:27:18.113 "name": "Nvme$subsystem", 00:27:18.113 "trtype": "$TEST_TRANSPORT", 00:27:18.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.113 "adrfam": "ipv4", 00:27:18.113 "trsvcid": "$NVMF_PORT", 00:27:18.113 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.113 "hdgst": ${hdgst:-false}, 00:27:18.113 "ddgst": ${ddgst:-false} 00:27:18.113 }, 00:27:18.113 "method": "bdev_nvme_attach_controller" 00:27:18.113 } 00:27:18.113 EOF 00:27:18.113 )") 00:27:18.374 [2024-10-08 17:43:10.105340] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:27:18.374 [2024-10-08 17:43:10.105393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442112 ] 00:27:18.374 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:27:18.374 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:18.374 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:18.374 { 00:27:18.374 "params": { 00:27:18.374 "name": "Nvme$subsystem", 00:27:18.374 "trtype": "$TEST_TRANSPORT", 00:27:18.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.374 "adrfam": "ipv4", 00:27:18.374 "trsvcid": "$NVMF_PORT", 00:27:18.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.374 "hdgst": ${hdgst:-false}, 00:27:18.374 "ddgst": ${ddgst:-false} 00:27:18.374 }, 00:27:18.374 "method": "bdev_nvme_attach_controller" 00:27:18.374 } 00:27:18.374 EOF 00:27:18.374 )") 00:27:18.374 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:27:18.374 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:18.374 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:18.374 { 00:27:18.374 "params": { 00:27:18.374 "name": "Nvme$subsystem", 00:27:18.374 "trtype": "$TEST_TRANSPORT", 00:27:18.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.374 "adrfam": "ipv4", 00:27:18.374 "trsvcid": "$NVMF_PORT", 00:27:18.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.374 "hdgst": ${hdgst:-false}, 00:27:18.374 "ddgst": ${ddgst:-false} 00:27:18.374 }, 00:27:18.374 "method": "bdev_nvme_attach_controller" 00:27:18.374 } 00:27:18.374 EOF 00:27:18.374 )") 00:27:18.374 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:27:18.374 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:18.374 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:18.374 { 00:27:18.374 "params": { 00:27:18.374 "name": "Nvme$subsystem", 00:27:18.374 "trtype": "$TEST_TRANSPORT", 00:27:18.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.374 "adrfam": "ipv4", 00:27:18.374 "trsvcid": "$NVMF_PORT", 00:27:18.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.374 "hdgst": ${hdgst:-false}, 00:27:18.374 "ddgst": ${ddgst:-false} 00:27:18.374 }, 00:27:18.374 "method": "bdev_nvme_attach_controller" 00:27:18.374 } 00:27:18.374 EOF 00:27:18.374 )") 00:27:18.374 17:43:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:27:18.374 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:27:18.374 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:27:18.374 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:27:18.374 "params": { 00:27:18.374 "name": "Nvme1", 00:27:18.374 "trtype": "tcp", 00:27:18.374 "traddr": "10.0.0.2", 00:27:18.374 "adrfam": "ipv4", 00:27:18.374 "trsvcid": "4420", 00:27:18.374 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:18.374 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:18.374 "hdgst": false, 00:27:18.374 "ddgst": false 00:27:18.374 }, 00:27:18.374 "method": "bdev_nvme_attach_controller" 00:27:18.374 },{ 00:27:18.374 "params": { 00:27:18.374 "name": "Nvme2", 00:27:18.374 "trtype": "tcp", 00:27:18.374 "traddr": "10.0.0.2", 00:27:18.374 "adrfam": "ipv4", 00:27:18.374 "trsvcid": "4420", 00:27:18.374 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:18.374 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:18.374 "hdgst": false, 00:27:18.374 "ddgst": false 00:27:18.374 }, 00:27:18.374 "method": "bdev_nvme_attach_controller" 00:27:18.374 },{ 00:27:18.374 "params": { 00:27:18.374 "name": "Nvme3", 00:27:18.374 "trtype": "tcp", 00:27:18.374 "traddr": "10.0.0.2", 00:27:18.374 "adrfam": "ipv4", 00:27:18.374 "trsvcid": "4420", 00:27:18.374 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:18.374 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:18.374 "hdgst": false, 00:27:18.374 "ddgst": false 00:27:18.374 }, 00:27:18.374 "method": "bdev_nvme_attach_controller" 00:27:18.374 },{ 00:27:18.374 "params": { 00:27:18.374 "name": "Nvme4", 00:27:18.374 "trtype": "tcp", 00:27:18.374 "traddr": "10.0.0.2", 00:27:18.374 "adrfam": "ipv4", 00:27:18.374 "trsvcid": "4420", 00:27:18.374 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:18.374 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:18.374 "hdgst": false, 00:27:18.374 "ddgst": false 00:27:18.374 }, 00:27:18.374 "method": "bdev_nvme_attach_controller" 00:27:18.374 },{ 00:27:18.374 "params": { 00:27:18.374 "name": "Nvme5", 00:27:18.374 "trtype": "tcp", 00:27:18.374 "traddr": "10.0.0.2", 00:27:18.374 "adrfam": "ipv4", 00:27:18.374 "trsvcid": "4420", 00:27:18.374 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:18.374 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:18.374 "hdgst": false, 00:27:18.374 "ddgst": false 00:27:18.374 }, 00:27:18.374 "method": "bdev_nvme_attach_controller" 00:27:18.374 },{ 00:27:18.374 "params": { 00:27:18.374 "name": "Nvme6", 00:27:18.374 "trtype": "tcp", 00:27:18.374 "traddr": "10.0.0.2", 00:27:18.374 "adrfam": "ipv4", 00:27:18.374 "trsvcid": "4420", 00:27:18.374 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:18.374 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:18.374 "hdgst": false, 00:27:18.374 "ddgst": false 00:27:18.374 }, 00:27:18.374 "method": "bdev_nvme_attach_controller" 00:27:18.374 },{ 00:27:18.374 "params": { 00:27:18.374 "name": "Nvme7", 00:27:18.374 "trtype": "tcp", 00:27:18.374 "traddr": "10.0.0.2", 00:27:18.375 "adrfam": "ipv4", 00:27:18.375 "trsvcid": "4420", 00:27:18.375 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:18.375 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:18.375 "hdgst": false, 00:27:18.375 "ddgst": false 00:27:18.375 }, 00:27:18.375 "method": "bdev_nvme_attach_controller" 00:27:18.375 },{ 00:27:18.375 "params": { 00:27:18.375 "name": "Nvme8", 00:27:18.375 "trtype": "tcp", 
00:27:18.375 "traddr": "10.0.0.2", 00:27:18.375 "adrfam": "ipv4", 00:27:18.375 "trsvcid": "4420", 00:27:18.375 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:18.375 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:18.375 "hdgst": false, 00:27:18.375 "ddgst": false 00:27:18.375 }, 00:27:18.375 "method": "bdev_nvme_attach_controller" 00:27:18.375 },{ 00:27:18.375 "params": { 00:27:18.375 "name": "Nvme9", 00:27:18.375 "trtype": "tcp", 00:27:18.375 "traddr": "10.0.0.2", 00:27:18.375 "adrfam": "ipv4", 00:27:18.375 "trsvcid": "4420", 00:27:18.375 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:18.375 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:18.375 "hdgst": false, 00:27:18.375 "ddgst": false 00:27:18.375 }, 00:27:18.375 "method": "bdev_nvme_attach_controller" 00:27:18.375 },{ 00:27:18.375 "params": { 00:27:18.375 "name": "Nvme10", 00:27:18.375 "trtype": "tcp", 00:27:18.375 "traddr": "10.0.0.2", 00:27:18.375 "adrfam": "ipv4", 00:27:18.375 "trsvcid": "4420", 00:27:18.375 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:18.375 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:18.375 "hdgst": false, 00:27:18.375 "ddgst": false 00:27:18.375 }, 00:27:18.375 "method": "bdev_nvme_attach_controller" 00:27:18.375 }' 00:27:18.375 [2024-10-08 17:43:10.182393] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.375 [2024-10-08 17:43:10.247056] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.287 Running I/O for 10 seconds... 00:27:20.287 17:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:20.287 17:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:27:20.287 17:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:20.287 17:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.287 17:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.287 17:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.287 17:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:20.287 17:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:20.287 17:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:27:20.287 17:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:27:20.287 17:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:27:20.287 17:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:27:20.287 17:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:20.287 17:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:20.287 17:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:20.287 17:43:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.287 17:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.287 17:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.287 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:27:20.287 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:27:20.287 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:20.287 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:20.287 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:20.287 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:20.287 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:20.287 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.287 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.548 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.548 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:27:20.548 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:27:20.548 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:20.809 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:20.809 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:20.809 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:20.809 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:20.809 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.809 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.809 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.809 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:27:20.809 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:27:20.809 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:27:20.809 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:27:20.809 17:43:12 
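For reference, the polling loop whose iterations appear above, reassembled from its xtrace (target/shutdown.sh lines 58 to 68): the read count went 3, then 67, then 131, at which point 131 -ge 100 flipped ret to 0 and broke the loop.

    # Body of waitforio, as reconstructed from the trace; note rpc_cmd talks to
    # the bdevperf RPC socket here, not the target's.
    ret=1
    (( i = 10 ))
    while (( i != 0 )); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
        (( i-- ))
    done
    return $ret

Using Nvme1n1 as the bellwether is enough: all ten controllers sit behind the same target, so once one of them is moving I/O the shutdown test has real traffic to interrupt.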
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:27:20.809 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 442112 00:27:20.809 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 442112 ']' 00:27:20.809 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 442112 00:27:20.809 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:27:20.809 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:20.809 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 442112 00:27:20.809 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:20.809 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:20.809 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 442112' 00:27:20.809 killing process with pid 442112 00:27:20.809 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 442112 00:27:20.809 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 442112 00:27:20.810 Received shutdown signal, test time was about 0.984331 seconds 00:27:20.810 00:27:20.810 Latency(us) 00:27:20.810 [2024-10-08T15:43:12.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:20.810 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.810 Verification LBA range: start 0x0 length 0x400 00:27:20.810 Nvme1n1 : 0.95 214.69 13.42 0.00 0.00 292283.33 5188.27 246415.36 00:27:20.810 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.810 Verification LBA range: start 0x0 length 0x400 00:27:20.810 Nvme2n1 : 0.97 263.44 16.47 0.00 0.00 235045.76 20643.84 225443.84 00:27:20.810 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.810 Verification LBA range: start 0x0 length 0x400 00:27:20.810 Nvme3n1 : 0.97 262.78 16.42 0.00 0.00 230860.80 18350.08 248162.99 00:27:20.810 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.810 Verification LBA range: start 0x0 length 0x400 00:27:20.810 Nvme4n1 : 0.98 261.81 16.36 0.00 0.00 227080.75 20316.16 244667.73 00:27:20.810 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.810 Verification LBA range: start 0x0 length 0x400 00:27:20.810 Nvme5n1 : 0.96 200.70 12.54 0.00 0.00 289402.60 20971.52 255153.49 00:27:20.810 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.810 Verification LBA range: start 0x0 length 0x400 00:27:20.810 Nvme6n1 : 0.97 264.45 16.53 0.00 0.00 214963.84 15510.19 248162.99 00:27:20.810 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.810 Verification LBA range: start 0x0 length 0x400 00:27:20.810 Nvme7n1 : 0.96 265.36 16.58 0.00 0.00 209297.71 15837.87 246415.36 00:27:20.810 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:27:20.810 Verification LBA range: start 0x0 length 0x400 00:27:20.810 Nvme8n1 : 0.98 260.31 16.27 0.00 0.00 209105.49 17476.27 255153.49 00:27:20.810 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.810 Verification LBA range: start 0x0 length 0x400 00:27:20.810 Nvme9n1 : 0.96 199.77 12.49 0.00 0.00 264719.93 18131.63 270882.13 00:27:20.810 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.810 Verification LBA range: start 0x0 length 0x400 00:27:20.810 Nvme10n1 : 0.95 201.49 12.59 0.00 0.00 255967.00 16711.68 249910.61 00:27:20.810 [2024-10-08T15:43:12.802Z] =================================================================================================================== 00:27:20.810 [2024-10-08T15:43:12.802Z] Total : 2394.79 149.67 0.00 0.00 239511.96 5188.27 270882.13 00:27:21.070 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:27:22.012 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 441848 00:27:22.012 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:27:22.012 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:22.012 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:22.012 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:22.012 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:22.012 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:22.012 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:27:22.012 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:22.012 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:27:22.012 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:22.012 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:22.012 rmmod nvme_tcp 00:27:22.012 rmmod nvme_fabrics 00:27:22.012 rmmod nvme_keyring 00:27:22.012 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:22.012 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:27:22.012 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:27:22.012 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 441848 ']' 00:27:22.012 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 441848 00:27:22.012 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 441848 ']' 00:27:22.012 17:43:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 441848 00:27:22.012 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:27:22.012 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:22.012 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 441848 00:27:22.274 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:22.274 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:22.274 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 441848' 00:27:22.274 killing process with pid 441848 00:27:22.274 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 441848 00:27:22.274 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 441848 00:27:22.538 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:22.538 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:22.538 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:22.538 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:27:22.538 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:22.538 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:27:22.538 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:27:22.538 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:22.538 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:22.538 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.538 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:22.538 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.450 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:24.450 00:27:24.450 real 0m8.118s 00:27:24.450 user 0m24.779s 00:27:24.450 sys 0m1.301s 00:27:24.451 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:24.451 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.451 ************************************ 00:27:24.451 END TEST nvmf_shutdown_tc2 00:27:24.451 ************************************ 00:27:24.451 17:43:16 
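That closes tc2. Collected from the trace, the teardown sequence amounts to the following (a sketch; the ip netns delete line is an assumption about what _remove_spdk_ns boils down to):

    rm -f ./local-job0-0-verify.state        # bdevperf's verify state file
    rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"
    sync
    modprobe -v -r nvme-tcp                  # rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 441848                              # stop nvmf_tgt (killprocess adds liveness checks)
    # Keep the firewall intact except for the rules ipts tagged earlier.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk          # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1

The grep -v SPDK_NVMF trick is why the setup phase tagged its ACCEPT rule with a comment: teardown can delete exactly its own rules without touching whatever else the CI host has configured.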
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:24.451 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:24.451 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:24.451 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:24.712 ************************************ 00:27:24.712 START TEST nvmf_shutdown_tc3 00:27:24.712 ************************************ 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:24.712 17:43:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:24.712 17:43:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:24.712 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.712 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:24.713 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:24.713 Found net devices under 0000:31:00.0: cvl_0_0 00:27:24.713 17:43:16 
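The discovery logic walking by above is short enough to quote in spirit (nvmf/common.sh, lines 408 to 427 per the trace): each supported PCI function advertises its net device name under sysfs, so no driver-specific tooling is needed.

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:31:00.0/net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

On this host both matches are E810 functions (device id 0x159b) at 0000:31:00.0 and 0000:31:00.1, and their interfaces show up as cvl_0_0 and cvl_0_1.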
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:24.713 Found net devices under 0000:31:00.1: cvl_0_1 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:24.713 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:24.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:24.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:27:24.975 00:27:24.975 --- 10.0.0.2 ping statistics --- 00:27:24.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.975 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:24.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:24.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:27:24.975 00:27:24.975 --- 10.0.0.1 ping statistics --- 00:27:24.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.975 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=443465 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 443465 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 443465 ']' 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
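The nvmf_tcp_init sequence traced above is the stock two-port topology for these runs: the target-side interface is moved into a private network namespace while the initiator side stays in the root namespace, so the target listens at 10.0.0.2:4420 and the initiator reaches it from 10.0.0.1. A minimal sketch of the same wiring, assuming two interfaces already renamed cvl_0_0/cvl_0_1 and root privileges:

    TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF" && ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"          # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"   # initiator keeps the host side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # root ns to namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1        # and the reverse path

This is also why every target-side command from here on, including the nvmf_tgt invocation below, is wrapped in 'ip netns exec cvl_0_0_ns_spdk'.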
00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:24.975 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:24.975 [2024-10-08 17:43:16.910801] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:27:24.975 [2024-10-08 17:43:16.910867] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.236 [2024-10-08 17:43:17.000069] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:25.236 [2024-10-08 17:43:17.059903] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.236 [2024-10-08 17:43:17.059939] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:25.236 [2024-10-08 17:43:17.059944] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:25.236 [2024-10-08 17:43:17.059949] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:25.236 [2024-10-08 17:43:17.059954] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:25.236 [2024-10-08 17:43:17.061308] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:25.236 [2024-10-08 17:43:17.061462] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:25.236 [2024-10-08 17:43:17.061614] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.236 [2024-10-08 17:43:17.061616] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:25.807 [2024-10-08 17:43:17.754347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:25.807 17:43:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:25.807 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:26.068 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:26.068 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:26.068 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:26.068 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:26.068 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:26.068 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:26.068 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:26.068 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:26.068 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.068 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:26.068 Malloc1 
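The ten "for i / cat" iterations traced above append one subsystem's worth of RPC calls per pass into rpcs.txt, and the single rpc_cmd that follows replays the whole file in one shot; the Malloc1 through Malloc10 lines around this point are the resulting bdev-creation output. Roughly what each pass writes, paraphrased from memory of the SPDK test tree (exact sizes and flags are approximate, and plain echo is used here instead of the script's here-documents so the sketch stays indentation-safe):

    for i in "${num_subsystems[@]}"; do
        {
            echo "bdev_malloc_create -b Malloc$i 128 512"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> "$testdir/rpcs.txt"   # one subsystem per loop pass
    done
    rpc_cmd < "$testdir/rpcs.txt"  # replay every queued RPC against the target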
00:27:26.068 [2024-10-08 17:43:17.853026] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:26.068 Malloc2 00:27:26.068 Malloc3 00:27:26.068 Malloc4 00:27:26.068 Malloc5 00:27:26.068 Malloc6 00:27:26.068 Malloc7 00:27:26.329 Malloc8 00:27:26.329 Malloc9 00:27:26.329 Malloc10 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=443755 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 443755 /var/tmp/bdevperf.sock 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 443755 ']' 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:26.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
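waitforlisten, used above first for nvmf_tgt and now for bdevperf, boils down to a bounded poll: confirm the pid is still alive, then probe the RPC socket until it answers. A rough stand-in for the autotest_common.sh helper (a hedged simplification, not the verbatim implementation; the rpc.py path is assumed relative to the spdk checkout):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} tries=100
        while (( tries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died before it ever listened
            # rpc_get_methods succeeds only once the socket accepts RPCs
            if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1                                      # gave up waiting
    }

The second socket here is bdevperf's own RPC endpoint (/var/tmp/bdevperf.sock), which the test needs both for framework_wait_init and for the bdev_get_iostat polling further down.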
00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:26.329 { 00:27:26.329 "params": { 00:27:26.329 "name": "Nvme$subsystem", 00:27:26.329 "trtype": "$TEST_TRANSPORT", 00:27:26.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.329 "adrfam": "ipv4", 00:27:26.329 "trsvcid": "$NVMF_PORT", 00:27:26.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.329 "hdgst": ${hdgst:-false}, 00:27:26.329 "ddgst": ${ddgst:-false} 00:27:26.329 }, 00:27:26.329 "method": "bdev_nvme_attach_controller" 00:27:26.329 } 00:27:26.329 EOF 00:27:26.329 )") 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:26.329 { 00:27:26.329 "params": { 00:27:26.329 "name": "Nvme$subsystem", 00:27:26.329 "trtype": "$TEST_TRANSPORT", 00:27:26.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.329 "adrfam": "ipv4", 00:27:26.329 "trsvcid": "$NVMF_PORT", 00:27:26.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.329 "hdgst": ${hdgst:-false}, 00:27:26.329 "ddgst": ${ddgst:-false} 00:27:26.329 }, 00:27:26.329 "method": "bdev_nvme_attach_controller" 00:27:26.329 } 00:27:26.329 EOF 00:27:26.329 )") 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:26.329 { 00:27:26.329 "params": { 00:27:26.329 "name": "Nvme$subsystem", 00:27:26.329 "trtype": "$TEST_TRANSPORT", 00:27:26.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.329 "adrfam": "ipv4", 00:27:26.329 "trsvcid": "$NVMF_PORT", 00:27:26.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.329 "hdgst": ${hdgst:-false}, 00:27:26.329 "ddgst": ${ddgst:-false} 00:27:26.329 }, 00:27:26.329 "method": "bdev_nvme_attach_controller" 00:27:26.329 } 00:27:26.329 EOF 00:27:26.329 )") 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:26.329 { 00:27:26.329 "params": { 00:27:26.329 "name": "Nvme$subsystem", 00:27:26.329 "trtype": "$TEST_TRANSPORT", 00:27:26.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.329 "adrfam": "ipv4", 00:27:26.329 "trsvcid": "$NVMF_PORT", 00:27:26.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.329 "hdgst": ${hdgst:-false}, 00:27:26.329 "ddgst": ${ddgst:-false} 00:27:26.329 }, 00:27:26.329 "method": "bdev_nvme_attach_controller" 00:27:26.329 } 00:27:26.329 EOF 00:27:26.329 )") 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:26.329 { 00:27:26.329 "params": { 00:27:26.329 "name": "Nvme$subsystem", 00:27:26.329 "trtype": "$TEST_TRANSPORT", 00:27:26.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.329 "adrfam": "ipv4", 00:27:26.329 "trsvcid": "$NVMF_PORT", 00:27:26.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.329 "hdgst": ${hdgst:-false}, 00:27:26.329 "ddgst": ${ddgst:-false} 00:27:26.329 }, 00:27:26.329 "method": "bdev_nvme_attach_controller" 00:27:26.329 } 00:27:26.329 EOF 00:27:26.329 )") 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:26.329 { 00:27:26.329 "params": { 00:27:26.329 "name": "Nvme$subsystem", 00:27:26.329 "trtype": "$TEST_TRANSPORT", 00:27:26.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.329 "adrfam": "ipv4", 00:27:26.329 "trsvcid": "$NVMF_PORT", 00:27:26.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.329 "hdgst": ${hdgst:-false}, 00:27:26.329 "ddgst": ${ddgst:-false} 00:27:26.329 }, 00:27:26.329 "method": "bdev_nvme_attach_controller" 00:27:26.329 } 00:27:26.329 EOF 00:27:26.329 )") 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:26.329 [2024-10-08 17:43:18.298411] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
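The repeated here-document stanzas above and below are gen_nvmf_target_json accumulating one bdev_nvme_attach_controller block per subsystem into a bash array; the "IFS=," and printf of ${config[*]} traced later are what join the blocks with commas before jq validates and pretty-prints the result. The join trick in isolation (generic bash, not the helper itself; a plain string append and a [ ] wrapper are used here so the fragment is short and jq sees a valid top-level value):

    config=()
    for i in 1 2 3; do
        config+=("{ \"name\": \"Nvme$i\" }")   # one JSON fragment per pass
    done
    IFS=,                                      # ${config[*]} joins with the first char of IFS
    printf '[%s]\n' "${config[*]}" | jq .      # emits a validated three-element array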
00:27:26.329 [2024-10-08 17:43:18.298465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid443755 ] 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:26.329 { 00:27:26.329 "params": { 00:27:26.329 "name": "Nvme$subsystem", 00:27:26.329 "trtype": "$TEST_TRANSPORT", 00:27:26.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.329 "adrfam": "ipv4", 00:27:26.329 "trsvcid": "$NVMF_PORT", 00:27:26.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.329 "hdgst": ${hdgst:-false}, 00:27:26.329 "ddgst": ${ddgst:-false} 00:27:26.329 }, 00:27:26.329 "method": "bdev_nvme_attach_controller" 00:27:26.329 } 00:27:26.329 EOF 00:27:26.329 )") 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:26.329 { 00:27:26.329 "params": { 00:27:26.329 "name": "Nvme$subsystem", 00:27:26.329 "trtype": "$TEST_TRANSPORT", 00:27:26.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.329 "adrfam": "ipv4", 00:27:26.329 "trsvcid": "$NVMF_PORT", 00:27:26.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.329 "hdgst": ${hdgst:-false}, 00:27:26.329 "ddgst": ${ddgst:-false} 00:27:26.329 }, 00:27:26.329 "method": "bdev_nvme_attach_controller" 00:27:26.329 } 00:27:26.329 EOF 00:27:26.329 )") 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:26.329 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:26.329 { 00:27:26.329 "params": { 00:27:26.330 "name": "Nvme$subsystem", 00:27:26.330 "trtype": "$TEST_TRANSPORT", 00:27:26.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.330 "adrfam": "ipv4", 00:27:26.330 "trsvcid": "$NVMF_PORT", 00:27:26.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.330 "hdgst": ${hdgst:-false}, 00:27:26.330 "ddgst": ${ddgst:-false} 00:27:26.330 }, 00:27:26.330 "method": "bdev_nvme_attach_controller" 00:27:26.330 } 00:27:26.330 EOF 00:27:26.330 )") 00:27:26.330 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:27:26.589 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:26.589 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:26.589 { 00:27:26.589 "params": { 00:27:26.589 "name": "Nvme$subsystem", 00:27:26.589 "trtype": "$TEST_TRANSPORT", 00:27:26.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.589 "adrfam": "ipv4", 00:27:26.589 "trsvcid": "$NVMF_PORT", 00:27:26.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.589 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.589 "hdgst": ${hdgst:-false}, 00:27:26.589 "ddgst": ${ddgst:-false} 00:27:26.589 }, 00:27:26.589 "method": "bdev_nvme_attach_controller" 00:27:26.589 } 00:27:26.589 EOF 00:27:26.589 )") 00:27:26.589 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:27:26.589 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:27:26.589 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:27:26.589 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:27:26.589 "params": { 00:27:26.589 "name": "Nvme1", 00:27:26.589 "trtype": "tcp", 00:27:26.589 "traddr": "10.0.0.2", 00:27:26.589 "adrfam": "ipv4", 00:27:26.589 "trsvcid": "4420", 00:27:26.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:26.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:26.589 "hdgst": false, 00:27:26.589 "ddgst": false 00:27:26.589 }, 00:27:26.589 "method": "bdev_nvme_attach_controller" 00:27:26.589 },{ 00:27:26.589 "params": { 00:27:26.589 "name": "Nvme2", 00:27:26.589 "trtype": "tcp", 00:27:26.589 "traddr": "10.0.0.2", 00:27:26.589 "adrfam": "ipv4", 00:27:26.589 "trsvcid": "4420", 00:27:26.589 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:26.589 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:26.589 "hdgst": false, 00:27:26.589 "ddgst": false 00:27:26.589 }, 00:27:26.589 "method": "bdev_nvme_attach_controller" 00:27:26.589 },{ 00:27:26.589 "params": { 00:27:26.589 "name": "Nvme3", 00:27:26.589 "trtype": "tcp", 00:27:26.589 "traddr": "10.0.0.2", 00:27:26.589 "adrfam": "ipv4", 00:27:26.589 "trsvcid": "4420", 00:27:26.589 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:26.589 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:26.589 "hdgst": false, 00:27:26.589 "ddgst": false 00:27:26.589 }, 00:27:26.589 "method": "bdev_nvme_attach_controller" 00:27:26.589 },{ 00:27:26.589 "params": { 00:27:26.589 "name": "Nvme4", 00:27:26.589 "trtype": "tcp", 00:27:26.589 "traddr": "10.0.0.2", 00:27:26.589 "adrfam": "ipv4", 00:27:26.589 "trsvcid": "4420", 00:27:26.589 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:26.589 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:26.589 "hdgst": false, 00:27:26.589 "ddgst": false 00:27:26.589 }, 00:27:26.589 "method": "bdev_nvme_attach_controller" 00:27:26.589 },{ 00:27:26.589 "params": { 00:27:26.589 "name": "Nvme5", 00:27:26.589 "trtype": "tcp", 00:27:26.589 "traddr": "10.0.0.2", 00:27:26.589 "adrfam": "ipv4", 00:27:26.589 "trsvcid": "4420", 00:27:26.589 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:26.589 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:26.589 "hdgst": false, 00:27:26.589 "ddgst": false 00:27:26.589 }, 00:27:26.589 "method": "bdev_nvme_attach_controller" 00:27:26.589 },{ 00:27:26.589 "params": { 00:27:26.589 "name": "Nvme6", 00:27:26.589 "trtype": "tcp", 00:27:26.589 "traddr": "10.0.0.2", 00:27:26.589 "adrfam": "ipv4", 00:27:26.589 "trsvcid": "4420", 00:27:26.589 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:26.589 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:26.589 "hdgst": false, 00:27:26.589 "ddgst": false 00:27:26.589 }, 00:27:26.589 "method": "bdev_nvme_attach_controller" 00:27:26.589 },{ 00:27:26.589 "params": { 00:27:26.589 "name": "Nvme7", 00:27:26.589 "trtype": "tcp", 00:27:26.589 "traddr": "10.0.0.2", 00:27:26.589 "adrfam": "ipv4", 00:27:26.589 "trsvcid": "4420", 00:27:26.589 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:26.589 
"hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:26.589 "hdgst": false, 00:27:26.589 "ddgst": false 00:27:26.589 }, 00:27:26.589 "method": "bdev_nvme_attach_controller" 00:27:26.589 },{ 00:27:26.589 "params": { 00:27:26.589 "name": "Nvme8", 00:27:26.589 "trtype": "tcp", 00:27:26.589 "traddr": "10.0.0.2", 00:27:26.589 "adrfam": "ipv4", 00:27:26.589 "trsvcid": "4420", 00:27:26.589 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:26.589 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:26.589 "hdgst": false, 00:27:26.589 "ddgst": false 00:27:26.589 }, 00:27:26.589 "method": "bdev_nvme_attach_controller" 00:27:26.589 },{ 00:27:26.589 "params": { 00:27:26.589 "name": "Nvme9", 00:27:26.589 "trtype": "tcp", 00:27:26.589 "traddr": "10.0.0.2", 00:27:26.589 "adrfam": "ipv4", 00:27:26.589 "trsvcid": "4420", 00:27:26.589 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:26.589 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:26.589 "hdgst": false, 00:27:26.589 "ddgst": false 00:27:26.589 }, 00:27:26.589 "method": "bdev_nvme_attach_controller" 00:27:26.589 },{ 00:27:26.589 "params": { 00:27:26.589 "name": "Nvme10", 00:27:26.589 "trtype": "tcp", 00:27:26.589 "traddr": "10.0.0.2", 00:27:26.589 "adrfam": "ipv4", 00:27:26.589 "trsvcid": "4420", 00:27:26.589 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:26.589 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:26.589 "hdgst": false, 00:27:26.589 "ddgst": false 00:27:26.589 }, 00:27:26.589 "method": "bdev_nvme_attach_controller" 00:27:26.589 }' 00:27:26.589 [2024-10-08 17:43:18.378492] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.589 [2024-10-08 17:43:18.444456] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.496 Running I/O for 10 seconds... 00:27:28.496 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:28.496 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:27:28.496 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:28.496 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.496 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:28.496 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.496 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:28.496 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:28.496 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:28.496 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:27:28.496 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:27:28.496 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:27:28.496 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # 
(( i = 10 )) 00:27:28.496 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:28.496 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:28.496 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:28.496 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.496 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:28.496 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.496 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:27:28.496 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:27:28.496 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:28.756 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:28.756 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:28.756 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:28.756 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:28.756 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.756 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:28.756 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.756 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:27:28.756 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:27:28.756 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:29.023 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:29.023 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:29.023 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:29.023 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:29.023 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.023 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:29.023 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.023 17:43:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:27:29.023 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:27:29.023 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:27:29.023 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:27:29.023 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:27:29.023 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 443465 00:27:29.023 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 443465 ']' 00:27:29.023 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 443465 00:27:29.023 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:27:29.023 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:29.023 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 443465 00:27:29.023 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:29.023 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:29.023 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 443465' 00:27:29.023 killing process with pid 443465 00:27:29.023 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 443465 00:27:29.023 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 443465 00:27:29.023 [2024-10-08 17:43:20.986484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4010 is same with the state(6) to be set 00:27:29.023 [2024-10-08 17:43:20.986528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4010 is same with the state(6) to be set 00:27:29.023 [2024-10-08 17:43:20.986534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4010 is same with the state(6) to be set 00:27:29.023 [2024-10-08 17:43:20.986540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4010 is same with the state(6) to be set 00:27:29.023 [2024-10-08 17:43:20.986545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4010 is same with the state(6) to be set 00:27:29.023 [2024-10-08 17:43:20.986550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4010 is same with the state(6) to be set 00:27:29.023 [2024-10-08 17:43:20.986555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4010 is same with the state(6) to be set 00:27:29.023 [2024-10-08 17:43:20.986560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4010 is same with the state(6) to be set 00:27:29.023 [2024-10-08 17:43:20.986565] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1cc4010 is same with the state(6) to be set 00:27:29.024 [2024-10-08 17:43:20.986784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4010 is same with the state(6) to be set 00:27:29.024 [2024-10-08 17:43:20.986789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4010 is same with the state(6) to be set 00:27:29.024 [2024-10-08 17:43:20.986793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4010 is same with the state(6) to be set 00:27:29.024 [2024-10-08 17:43:20.986799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4010 is same with the state(6) to be set 00:27:29.024 [2024-10-08 17:43:20.986804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4010 is same with the state(6) to be set 00:27:29.024 [2024-10-08 17:43:20.986809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4010 is same with the state(6) to be set 00:27:29.024 [2024-10-08 17:43:20.986814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4010 is same with the state(6) to be set 00:27:29.024 [2024-10-08 17:43:20.986818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4010 is same with the state(6) to be set 00:27:29.024 [2024-10-08 17:43:20.986823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4010 is same with the state(6) to be set 00:27:29.024 [2024-10-08 17:43:20.986827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4010 is same with the state(6) to be set 00:27:29.024 [2024-10-08 17:43:20.986832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4010 is same with the state(6) to be set 00:27:29.024 [2024-10-08 17:43:20.988553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.024 [2024-10-08 17:43:20.988590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.024 [2024-10-08 17:43:20.988609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.024 [2024-10-08 17:43:20.988618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.024 [2024-10-08 17:43:20.988629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.024 [2024-10-08 17:43:20.988637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.024 [2024-10-08 17:43:20.988647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.024 [2024-10-08 17:43:20.988654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.024 [2024-10-08 17:43:20.988664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.024 [2024-10-08 17:43:20.988671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.024 [2024-10-08 17:43:20.988681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.024 [2024-10-08 17:43:20.988688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.024 [2024-10-08 17:43:20.988698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.024 [2024-10-08 17:43:20.988705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.024 [2024-10-08 17:43:20.988715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.024 [2024-10-08 17:43:20.988723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.024 [2024-10-08 17:43:20.988732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.024 [2024-10-08 17:43:20.988740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.024 [2024-10-08 17:43:20.988749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.024 [2024-10-08 17:43:20.988757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.024 [2024-10-08 17:43:20.988767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.024 [2024-10-08 17:43:20.988774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.024 [2024-10-08 17:43:20.988783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.024 [2024-10-08 17:43:20.988791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.024 [2024-10-08 17:43:20.988800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.024 [2024-10-08 17:43:20.988813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.024 [2024-10-08 17:43:20.988822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.024 [2024-10-08 17:43:20.988829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.024 [2024-10-08 17:43:20.988839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.024 [2024-10-08 17:43:20.988846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.024 [2024-10-08 17:43:20.988856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.024 [2024-10-08 17:43:20.988863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.024 [2024-10-08 17:43:20.988872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.024 [2024-10-08 17:43:20.988880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.024 [2024-10-08 17:43:20.988889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.024 [2024-10-08 17:43:20.988896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.024 [2024-10-08 17:43:20.988906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.024 [2024-10-08 17:43:20.988913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.024 [2024-10-08 17:43:20.988923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.024 [2024-10-08 17:43:20.988930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.024 [2024-10-08 17:43:20.988939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.024 [2024-10-08 17:43:20.988947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.024 [2024-10-08 17:43:20.988956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.024 [2024-10-08 17:43:20.988963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.988972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.988986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.988995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989533] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.025 [2024-10-08 17:43:20.989681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.025 [2024-10-08 17:43:20.989689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.026 [2024-10-08 17:43:20.989858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.989995] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990008] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24854e0 was disconnected and freed. reset controller. 00:27:29.026 [2024-10-08 17:43:20.990018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the
state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.026 [2024-10-08 17:43:20.990166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.026 [2024-10-08 17:43:20.990176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026 [2024-10-08 17:43:20.990186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.026 [2024-10-08 17:43:20.990188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc49b0 is same with the state(6) to be set 00:27:29.026
[2024-10-08 17:43:20.990196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.026 [2024-10-08 17:43:20.990205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.026 [2024-10-08 17:43:20.990213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.026 [2024-10-08 17:43:20.990223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.026 [2024-10-08 17:43:20.990230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.026 [2024-10-08 17:43:20.990240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.026 [2024-10-08 17:43:20.990247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.026 [2024-10-08 17:43:20.990257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.026 [2024-10-08 17:43:20.990264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.026 [2024-10-08 17:43:20.990274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.026 [2024-10-08 17:43:20.990284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.026 [2024-10-08 17:43:20.990294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.026 [2024-10-08 17:43:20.990302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.026 [2024-10-08 17:43:20.990311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.026 [2024-10-08 17:43:20.990318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.026 [2024-10-08 17:43:20.990328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.026 [2024-10-08 17:43:20.990336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.026 [2024-10-08 17:43:20.990345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.026 [2024-10-08 17:43:20.990352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990362] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990708] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990877] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.027 [2024-10-08 17:43:20.990936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.027 [2024-10-08 17:43:20.990946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.028 [2024-10-08 17:43:20.990953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.028 [2024-10-08 17:43:20.990963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.028 [2024-10-08 17:43:20.990970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.028 [2024-10-08 17:43:20.990985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.028 [2024-10-08 17:43:20.990992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.028 [2024-10-08 17:43:20.991002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.028 [2024-10-08 17:43:20.991009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.028 [2024-10-08 17:43:20.991018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.028 [2024-10-08 17:43:20.991026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.028 [2024-10-08 17:43:20.991035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.028 [2024-10-08 17:43:20.991043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.028 [2024-10-08 17:43:20.991052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.028 [2024-10-08 17:43:20.991059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.028 [2024-10-08 17:43:20.991069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.028 [2024-10-08 17:43:20.991076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.028 [2024-10-08 17:43:20.991086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.028 [2024-10-08 17:43:20.991093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.028 [2024-10-08 17:43:20.991102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.028 [2024-10-08 17:43:20.991109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.028 [2024-10-08 17:43:20.991119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.028 [2024-10-08 17:43:20.991126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.028 [2024-10-08 17:43:20.991136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.028 [2024-10-08 17:43:20.991143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.028 [2024-10-08 17:43:20.991154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.028 [2024-10-08 17:43:20.991161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.028 [2024-10-08 17:43:20.991171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.028 [2024-10-08 17:43:20.991178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.028 [2024-10-08 17:43:20.991188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.028 [2024-10-08 17:43:20.991196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.028 [2024-10-08 17:43:20.991205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.028 [2024-10-08 17:43:20.991215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.028 [2024-10-08 17:43:20.991224] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.028 [2024-10-08 17:43:20.991232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.028 [2024-10-08 17:43:20.991241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.028 [2024-10-08 17:43:20.991248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.028 [2024-10-08 17:43:20.991258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.028 [2024-10-08 17:43:20.991265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.028 [2024-10-08 17:43:20.991313] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x248dd80 was disconnected and freed. reset controller. 00:27:29.028 [2024-10-08 17:43:20.991405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 
17:43:20.991600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.028 [2024-10-08 17:43:20.991662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.029 [2024-10-08 17:43:20.991667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.029 [2024-10-08 17:43:20.991671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.029 [2024-10-08 17:43:20.991676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.029 [2024-10-08 17:43:20.991680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.029 [2024-10-08 17:43:20.991685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.029 [2024-10-08 17:43:20.991689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.029 [2024-10-08 17:43:20.991694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set 00:27:29.029 [2024-10-08 17:43:20.991698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same 
with the state(6) to be set
00:27:29.029 [2024-10-08 17:43:20.991703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4ea0 is same with the state(6) to be set
00:27:29.029 [... the same message repeated for tqpair=0x1cc4ea0 through 17:43:20.991722 ...]
00:27:29.029 [2024-10-08 17:43:20.992347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc5370 is same with the state(6) to be set
00:27:29.029 [... the same message repeated for tqpair=0x1cc5370 through 17:43:20.992650 ...]
00:27:29.029 [2024-10-08 17:43:20.992747] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:29.029 [2024-10-08 17:43:20.992807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c7270 (9): Bad file descriptor
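The wall of repeated "recv state ... is same with the state(6) to be set" lines above is one guard firing in a tight loop: while the connection is torn down, the qpair keeps being asked to enter the recv state it is already in. A minimal sketch of such a log-and-return guard, in illustrative C (the enum values and names here are assumptions, not SPDK's actual tcp.c):

    #include <stdio.h>

    /* Hypothetical recv-state enum; 6 standing in for the error state is an
     * assumption based on the "state(6)" printed above. */
    enum recv_state { RECV_STATE_READY = 0, RECV_STATE_ERROR = 6 };

    struct tqpair { enum recv_state recv_state; };

    /* Re-entering the current state is reported instead of applied. */
    static void set_recv_state(struct tqpair *tq, enum recv_state state)
    {
        if (tq->recv_state == state) {
            fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tq, (int)state);
            return;
        }
        tq->recv_state = state;
    }

    int main(void)
    {
        struct tqpair tq = { RECV_STATE_ERROR };
        set_recv_state(&tq, RECV_STATE_ERROR);  /* fires the message once per call */
        return 0;
    }

Called from an event loop that keeps requesting the error state, this prints one line per poll iteration, which is exactly the flood pattern seen here.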
00:27:29.029 [2024-10-08 17:43:20.993559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc5840 is same with the state(6) to be set
00:27:29.030 [... the same message repeated for tqpair=0x1cc5840 through 17:43:20.993870 ...]
00:27:29.030 [2024-10-08 17:43:20.994311] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:27:29.030 [2024-10-08 17:43:20.994357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f56ae0 (9): Bad file descriptor
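The "(9): Bad file descriptor" flush failures are plain EBADF: by the time the flush runs, the socket behind the qpair has already been closed. A self-contained POSIX illustration (not SPDK code) that produces the same errno/text pair:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];

        if (pipe(fds) != 0)
            return 1;
        close(fds[1]);                  /* descriptor torn down, as after disconnect */
        if (write(fds[1], "x", 1) < 0)  /* the "flush" now hits a dead fd */
            printf("Failed to flush (%d): %s\n", errno, strerror(errno));
        return 0;
    }

On Linux this prints "Failed to flush (9): Bad file descriptor", matching the messages above.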
00:27:29.030 [2024-10-08 17:43:20.994413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:29.030 [2024-10-08 17:43:20.994425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.030 [... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1 through cid:3 ...]
00:27:29.030 [2024-10-08 17:43:20.994483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2381370 is same with the state(6) to be set
00:27:29.030 [... the same four-command dump repeated for tqpair=0x1f5c660 (17:43:20.994510-994572), tqpair=0x1f5d5f0 (994597-994658) and tqpair=0x1f60030 (994696-994760) ...]
00:27:29.031 [2024-10-08 17:43:20.994784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc5d10 is same with the state(6) to be set
00:27:29.031 [2024-10-08 17:43:20.994802] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:29.031 [... the tqpair=0x1cc5d10 recv-state message repeated through 17:43:20.994918 ...]
00:27:29.031 [2024-10-08 17:43:20.995809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.031 [2024-10-08 17:43:20.995832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c7270 with addr=10.0.0.2, port=4420
00:27:29.031 [2024-10-08 17:43:20.995841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c7270 is same with the state(6) to be set
00:27:29.031 [2024-10-08 17:43:20.996709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.031 [2024-10-08 17:43:20.996732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f56ae0 with addr=10.0.0.2, port=4420
00:27:29.031 [2024-10-08 17:43:20.996740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f56ae0 is same with the state(6) to be set
00:27:29.031 [2024-10-08 17:43:20.996751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c7270 (9): Bad file descriptor
00:27:29.031 [2024-10-08 17:43:20.996796] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:29.031 [2024-10-08 17:43:20.997034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f56ae0 (9): Bad file descriptor
00:27:29.031 [2024-10-08 17:43:20.997055] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:27:29.031 [2024-10-08 17:43:20.997062] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:27:29.031 [2024-10-08 17:43:20.997072] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:27:29.031 [2024-10-08 17:43:20.997141] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:29.031 [2024-10-08 17:43:20.997177] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:29.031 [2024-10-08 17:43:20.997332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
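errno = 111 is ECONNREFUSED on Linux: the initiator is dialing 10.0.0.2:4420 (the NVMe/TCP default port, as in the log) while nothing is listening on the target side, so every reconnect attempt is refused. A plain-sockets illustration of the same failure, assuming no listener on that address:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = { 0 };

        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
            printf("connect() failed, errno = %d\n", errno);  /* 111 = ECONNREFUSED */
        close(fd);
        return 0;
    }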
00:27:29.031 [2024-10-08 17:43:20.997347] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:27:29.031 [2024-10-08 17:43:20.997354] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:27:29.031 [2024-10-08 17:43:20.997363] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:27:29.031 [2024-10-08 17:43:20.997426] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:29.031 [2024-10-08 17:43:20.997540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.031 [2024-10-08 17:43:21.004380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:29.031 [2024-10-08 17:43:21.004400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.031 [... the same pair repeated for cid:1 through cid:3 (17:43:21.004409-004451) ...]
00:27:29.031 [2024-10-08 17:43:21.004459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e76610 is same with the state(6) to be set
00:27:29.031 [2024-10-08 17:43:21.004485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2381370 (9): Bad file descriptor
00:27:29.031 [2024-10-08 17:43:21.004502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5c660 (9): Bad file descriptor
00:27:29.031 [2024-10-08 17:43:21.004517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5d5f0 (9): Bad file descriptor
00:27:29.031 [... the same four-command ASYNC EVENT REQUEST / ABORTED - SQ DELETION dump repeated for tqpair=0x23cffd0 (17:43:21.004544-004606) ...]
00:27:29.031 [2024-10-08 17:43:21.004630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f60030 (9): Bad file descriptor
00:27:29.031 [2024-10-08 17:43:21.004831] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:29.031 [2024-10-08 17:43:21.005131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.031 [2024-10-08 17:43:21.005148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c7270 with addr=10.0.0.2, port=4420
00:27:29.031 [2024-10-08 17:43:21.005156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c7270 is same with the state(6) to be set
00:27:29.031 [2024-10-08 17:43:21.005221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c7270 (9): Bad file descriptor
00:27:29.031 [2024-10-08 17:43:21.005269] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:27:29.031 [2024-10-08 17:43:21.005277] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:27:29.031 [2024-10-08 17:43:21.005285] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:27:29.031 [2024-10-08 17:43:21.005327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.031 [2024-10-08 17:43:21.006001] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:27:29.031 [2024-10-08 17:43:21.006252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.031 [2024-10-08 17:43:21.006267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f56ae0 with addr=10.0.0.2, port=4420
00:27:29.031 [2024-10-08 17:43:21.006278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f56ae0 is same with the state(6) to be set
00:27:29.031 [2024-10-08 17:43:21.006916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f56ae0 (9): Bad file descriptor
00:27:29.031 [2024-10-08 17:43:21.007866] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:27:29.031 [2024-10-08 17:43:21.007877] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:27:29.031 [2024-10-08 17:43:21.007886] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
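Each cnodeN block above walks the same ladder: a disconnect notice, a refused transport connect, "Ctrlr is in error state", "controller reinitialization failed", "in failed state.", and finally "Resetting controller failed." A hedged sketch of that control flow (the names here are illustrative stand-ins, not SPDK's API):

    #include <stdbool.h>
    #include <stdio.h>

    struct ctrlr { const char *nqn; bool failed; };

    /* Stand-in for the TCP reconnect; returns false because connect()
     * keeps being refused (errno 111) in the log above. */
    static bool transport_reconnect(struct ctrlr *c) { (void)c; return false; }

    static void reset_ctrlr(struct ctrlr *c)
    {
        printf("[%s] resetting controller\n", c->nqn);
        if (transport_reconnect(c))
            return;                           /* a successful reset ends here */
        printf("[%s] Ctrlr is in error state\n", c->nqn);
        printf("[%s] controller reinitialization failed\n", c->nqn);
        c->failed = true;                     /* analogue of nvme_ctrlr_fail */
        printf("[%s] in failed state.\n", c->nqn);
        printf("Resetting controller failed.\n");
    }

    int main(void)
    {
        struct ctrlr c = { "nqn.2016-06.io.spdk:cnode2", false };
        reset_ctrlr(&c);
        return 0;
    }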
00:27:29.032 [2024-10-08 17:43:21.007936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.300 [2024-10-08 17:43:21.012383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc5d10 is same with the state(6) to be set
00:27:29.301 [... the same message repeated for tqpair=0x1cc5d10 through 17:43:21.012627 ...]
00:27:29.301 [2024-10-08 17:43:21.012688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.301 [2024-10-08 17:43:21.012704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
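The "(00/08)" in every ABORTED completion is status code type 0x00 (generic command status) / status code 0x08, which the NVMe spec defines as Command Aborted due to SQ Deletion: the in-flight commands were aborted because their submission queue was deleted during the reset. Decoding the 16-bit completion status field per the spec layout:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* NVMe CQE status word: bit 0 = phase tag, bits 8:1 = status code (SC),
         * bits 11:9 = status code type (SCT). */
        uint16_t status = (uint16_t)((0x0 << 9) | (0x08 << 1));

        uint8_t sct = (status >> 9) & 0x7;   /* 0x00: generic command status    */
        uint8_t sc  = (status >> 1) & 0xff;  /* 0x08: aborted due to SQ deletion */

        printf("ABORTED - SQ DELETION (%02x/%02x)\n", sct, sc);
        return 0;
    }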
00:27:29.301 [... the same READ + ABORTED - SQ DELETION pair repeated for cid:1 through cid:63, lba stepping by 128 from 24704 to 32640 (17:43:21.012717-013797) ...]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.302 [2024-10-08 17:43:21.013797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.302 [2024-10-08 17:43:21.013805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2364e50 is same with the state(6) to be set 00:27:29.302 [2024-10-08 17:43:21.013843] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2364e50 was disconnected and freed. reset controller. 00:27:29.302 [2024-10-08 17:43:21.015096] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:29.302 [2024-10-08 17:43:21.015114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cffd0 (9): Bad file descriptor 00:27:29.302 [2024-10-08 17:43:21.015128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e76610 (9): Bad file descriptor 00:27:29.302 [2024-10-08 17:43:21.015162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.302 [2024-10-08 17:43:21.015172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.302 [2024-10-08 17:43:21.015182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.302 [2024-10-08 17:43:21.015191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.302 [2024-10-08 17:43:21.015204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.302 [2024-10-08 17:43:21.015213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.303 [2024-10-08 17:43:21.015231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b1fc0 is same with the state(6) to be set 00:27:29.303 [2024-10-08 17:43:21.015286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.303 [2024-10-08 17:43:21.015296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.303 [2024-10-08 17:43:21.015313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.303 [2024-10-08 
17:43:21.015328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.303 [2024-10-08 17:43:21.015343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b1ca0 is same with the state(6) to be set 00:27:29.303 [2024-10-08 17:43:21.015458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015771] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015939] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.015984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.015996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.016003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.016012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.016019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.016029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.016036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.016046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.016053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.303 [2024-10-08 17:43:21.016062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.303 [2024-10-08 17:43:21.022495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.022985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.022995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.023002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.023012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.023019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.023028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248cae0 is same with the state(6) to be set 00:27:29.304 [2024-10-08 17:43:21.024348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.024363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.024377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.024385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.024394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.024402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.024411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.024419] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.024432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.024439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.024448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.024455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.024464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.024472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.024481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.024489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.024498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.024505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.024515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.024522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.024531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.024539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.024549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.024556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.024565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.304 [2024-10-08 17:43:21.024573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.304 [2024-10-08 17:43:21.024582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.024991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.024998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.025008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.025015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.025024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.025032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.025041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.025048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.025058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.025066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.025076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.025083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.025092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.025100] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.025109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.025116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.025125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.025133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.025142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.025149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.025158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.025166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.305 [2024-10-08 17:43:21.025175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.305 [2024-10-08 17:43:21.025182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.025191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.025198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.025208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.025215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.025224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.025231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.025241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.025248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.025257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.025264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.025275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.025283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.025292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.025300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.025309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.025317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.025326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.025333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.025342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.025349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.025359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.025366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.025375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.025382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.025392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.025399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.025408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.025416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.025425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.025433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.025442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347ab0 is same with the state(6) to be set 00:27:29.306 [2024-10-08 17:43:21.026718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.026730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.026742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.026750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.026761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.026769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.026778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.026786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.026795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.026802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.026812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.026819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.026828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.026836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.026845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.026853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.026862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.026870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.026879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.026886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.026896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.026904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.026913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.026920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.026930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.026937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.026947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.026954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.026964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.026973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.026991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.026999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.027008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.027015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.027025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.027032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.027041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.027049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.027058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.027065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.027074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.027082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.027091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.027098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.027108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.027116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.027126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.027133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.027143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.027150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.027159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.306 [2024-10-08 17:43:21.027166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.306 [2024-10-08 17:43:21.027176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:29.307 [2024-10-08 17:43:21.027572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 
17:43:21.027738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.027804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.027812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354f40 is same with the state(6) to be set 00:27:29.307 [2024-10-08 17:43:21.029079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.029094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.029106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.029114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.029124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.029132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.307 [2024-10-08 17:43:21.029141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.307 [2024-10-08 17:43:21.029149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029346] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.308 [2024-10-08 17:43:21.029860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.308 [2024-10-08 17:43:21.029869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.029876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.029886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.029894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.029903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.029911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.029920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.029928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.029937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.029946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.029955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.029963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.029972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.029984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.029994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.030001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.030011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.030018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.030028] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.030035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.030044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.030052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.030061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.030068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.030078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.030085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.030095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.030102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.030111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.030119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.030129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.030136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.030145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.030153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.030164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.030172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.030180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2362350 is same with the state(6) to be set 00:27:29.309 [2024-10-08 17:43:21.031712] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:29.309 [2024-10-08 17:43:21.031733] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.309 [2024-10-08 17:43:21.031743] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:27:29.309 [2024-10-08 17:43:21.031753] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:27:29.309 [2024-10-08 17:43:21.032249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.309 [2024-10-08 17:43:21.032288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23cffd0 with addr=10.0.0.2, port=4420
00:27:29.309 [2024-10-08 17:43:21.032301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cffd0 is same with the state(6) to be set
00:27:29.309 [2024-10-08 17:43:21.032356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b1fc0 (9): Bad file descriptor
00:27:29.309 [2024-10-08 17:43:21.032381] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:29.309 [2024-10-08 17:43:21.032396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b1ca0 (9): Bad file descriptor
00:27:29.309 [2024-10-08 17:43:21.032417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cffd0 (9): Bad file descriptor
00:27:29.309 [2024-10-08 17:43:21.032498] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:27:29.309 [2024-10-08 17:43:21.032892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.309 [2024-10-08 17:43:21.032906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c7270 with addr=10.0.0.2, port=4420
00:27:29.309 [2024-10-08 17:43:21.032914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c7270 is same with the state(6) to be set
00:27:29.309 [2024-10-08 17:43:21.033217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.309 [2024-10-08 17:43:21.033255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f60030 with addr=10.0.0.2, port=4420
00:27:29.309 [2024-10-08 17:43:21.033267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f60030 is same with the state(6) to be set
00:27:29.309 [2024-10-08 17:43:21.033607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.309 [2024-10-08 17:43:21.033619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5d5f0 with addr=10.0.0.2, port=4420
00:27:29.309 [2024-10-08 17:43:21.033626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5d5f0 is same with the state(6) to be set
00:27:29.309 [2024-10-08 17:43:21.033944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.309 [2024-10-08 17:43:21.033954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5c660 with addr=10.0.0.2, port=4420
00:27:29.309 [2024-10-08 17:43:21.033962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5c660 is same with the state(6) to be set
00:27:29.309 [2024-10-08 17:43:21.035047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
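(Editorial note, not part of the captured console output: the "ABORTED - SQ DELETION (00/08)" completions that fill this section decode, per the NVMe base specification, as status code type 0x0 (generic command status) and status code 0x08, which the specification names "Command Aborted due to SQ Deletion"; every READ still queued on a submission queue completes with that status while the controllers reset. A minimal sketch of the decode in C, using only spec-defined values; decode_status is an illustrative helper, not an SPDK API:)

/* Minimal sketch: decode the "(SCT/SC)" pair printed in the log above.
 * Values follow the NVMe base specification; decode_status is an
 * illustrative helper, not an SPDK API. */
#include <stdio.h>

static const char *decode_status(unsigned sct, unsigned sc)
{
    if (sct == 0x0 && sc == 0x08)
        return "Command Aborted due to SQ Deletion";
    return "other status";
}

int main(void)
{
    printf("(%02x/%02x) -> %s\n", 0u, 0x08u, decode_status(0x0, 0x08));
    return 0;
}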
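(A second editorial note: the "connect() failed, errno = 111" lines above come from the host side trying to reconnect while the target at 10.0.0.2:4420 is not yet accepting connections; on Linux, errno 111 is ECONNREFUSED, which this one-line C check confirms:)

/* Minimal sketch: show that errno 111 from the connect() failures above
 * is ECONNREFUSED ("Connection refused") on Linux/glibc. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    printf("errno %d -> %s\n", ECONNREFUSED, strerror(ECONNREFUSED));
    return 0;
}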
00:27:29.309 [2024-10-08 17:43:21.035061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.035082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.035090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.035100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.035107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.035117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.035124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.035133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.035141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.035150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.035158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.035167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.035174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.309 [2024-10-08 17:43:21.035183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.309 [2024-10-08 17:43:21.035190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 17:43:21.035208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 17:43:21.035224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 
17:43:21.035241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 17:43:21.035257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 17:43:21.035274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 17:43:21.035294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 17:43:21.035310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 17:43:21.035327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 17:43:21.035344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 17:43:21.035361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 17:43:21.035378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 17:43:21.035394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 17:43:21.035411] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 17:43:21.035428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 17:43:21.035445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 17:43:21.035462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 17:43:21.035478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 17:43:21.035495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 17:43:21.035514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 17:43:21.035531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 17:43:21.035548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 17:43:21.035565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.310 [2024-10-08 17:43:21.035574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.310 [2024-10-08 17:43:21.035581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.310 [... the same pair of records - a READ command notice (nvme_qpair.c: 243) followed by an ABORTED - SQ DELETION (00/08) completion (nvme_qpair.c: 474) - repeats for cid:31 through cid:63 (lba 28544 through 32640, len:128 each) as qid:1 is deleted ...]
00:27:29.311 [2024-10-08 17:43:21.036163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23638d0 is same with the state(6) to be set
00:27:29.311 [2024-10-08 17:43:21.037910] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:27:29.311 task offset: 26624 on job bdev=Nvme10n1 fails
00:27:29.311
00:27:29.311 Latency(us)
00:27:29.311 [2024-10-08T15:43:21.303Z] Device Information (all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; every job except Nvme8n1 and Nvme9n1 ended in about a second with error):
00:27:29.311 Device     runtime(s)     IOPS   MiB/s  Fail/s  TO/s    Average        min        max
00:27:29.311 Nvme1n1          0.99   129.28    8.08   64.64  0.00  326543.64   17585.49  251658.24
00:27:29.311 Nvme2n1          0.96   200.01   12.50   66.67  0.00  232585.44    4942.51  251658.24
00:27:29.311 Nvme3n1          0.99   193.46   12.09   64.49  0.00  235947.52   21080.75  232434.35
00:27:29.311 Nvme4n1          0.99   193.00   12.06   64.33  0.00  231811.63   17039.36  239424.85
00:27:29.311 Nvme5n1          1.00   128.36    8.02   64.18  0.00  303692.23   19988.48  279620.27
00:27:29.311 Nvme6n1          1.00   191.39   11.96   63.80  0.00  224450.99   15291.73  246415.36
00:27:29.311 Nvme7n1          0.98   195.73   12.23   65.24  0.00  214166.40   18240.85  249910.61
00:27:29.311 Nvme8n1          0.97   262.91   16.43    0.00  0.00  207594.88   21736.11  242920.11
00:27:29.311 Nvme9n1          0.97   198.33   12.40    0.00  0.00  268518.97   17039.36  272629.76
00:27:29.311 Nvme10n1         0.96   200.28   12.52   66.76  0.00  194372.91    3768.32  242920.11
00:27:29.311 [2024-10-08T15:43:21.303Z] ===================================================================================
00:27:29.311 Total                   1892.74  118.30  520.11  0.00  239459.02    3768.32  279620.27
00:27:29.311 [2024-10-08 17:43:21.063762] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:29.311 [2024-10-08 17:43:21.063810] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:29.311 [2024-10-08 17:43:21.064199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.311 [2024-10-08 17:43:21.064218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2381370 with addr=10.0.0.2, port=4420
00:27:29.311 [2024-10-08 17:43:21.064236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2381370 is same with the state(6) to be set
00:27:29.311 [2024-10-08 17:43:21.064251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c7270 (9): Bad file descriptor
00:27:29.311 [2024-10-08 17:43:21.064263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f60030 (9): Bad file descriptor
00:27:29.311 [2024-10-08 17:43:21.064273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5d5f0 (9): Bad file descriptor
00:27:29.311 [2024-10-08 17:43:21.064282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5c660 (9): Bad file descriptor
00:27:29.311 [2024-10-08 17:43:21.064291] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:27:29.311 [2024-10-08 17:43:21.064298] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:27:29.311 [2024-10-08 17:43:21.064308] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:27:29.311 [2024-10-08 17:43:21.064424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
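As a quick sanity check on the table above, the throughput and IOPS columns are consistent under the usual relation MiB/s = IOPS * IO size / 2^20: for Nvme1n1, 129.28 IOPS at the 65536-byte IO size is 129.28 / 16 = 8.08 MiB/s, matching the MiB/s column. An illustrative one-liner (not part of the test):

    awk 'BEGIN { printf "%.2f MiB/s\n", 129.28 * 65536 / 1048576 }'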
00:27:29.311 [2024-10-08 17:43:21.064715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.311 [2024-10-08 17:43:21.064728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f56ae0 with addr=10.0.0.2, port=4420 00:27:29.311 [2024-10-08 17:43:21.064736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f56ae0 is same with the state(6) to be set 00:27:29.311 [2024-10-08 17:43:21.065089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.311 [2024-10-08 17:43:21.065100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e76610 with addr=10.0.0.2, port=4420 00:27:29.311 [2024-10-08 17:43:21.065107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e76610 is same with the state(6) to be set 00:27:29.311 [2024-10-08 17:43:21.065117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2381370 (9): Bad file descriptor 00:27:29.311 [2024-10-08 17:43:21.065125] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:29.311 [2024-10-08 17:43:21.065132] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:29.311 [2024-10-08 17:43:21.065139] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:29.311 [2024-10-08 17:43:21.065150] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.311 [2024-10-08 17:43:21.065156] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.311 [2024-10-08 17:43:21.065163] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.311 [2024-10-08 17:43:21.065174] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:29.311 [2024-10-08 17:43:21.065181] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:29.311 [2024-10-08 17:43:21.065187] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:29.311 [2024-10-08 17:43:21.065198] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:29.311 [2024-10-08 17:43:21.065205] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:29.311 [2024-10-08 17:43:21.065211] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:29.311 [2024-10-08 17:43:21.065248] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:29.312 [2024-10-08 17:43:21.065263] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:29.312 [2024-10-08 17:43:21.065274] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:29.312 [2024-10-08 17:43:21.065284] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:29.312 [2024-10-08 17:43:21.065303] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:29.312 [2024-10-08 17:43:21.065878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.312 [2024-10-08 17:43:21.065891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.312 [2024-10-08 17:43:21.065897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.312 [2024-10-08 17:43:21.065904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.312 [2024-10-08 17:43:21.065923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f56ae0 (9): Bad file descriptor 00:27:29.312 [2024-10-08 17:43:21.065934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e76610 (9): Bad file descriptor 00:27:29.312 [2024-10-08 17:43:21.065942] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:29.312 [2024-10-08 17:43:21.065948] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:29.312 [2024-10-08 17:43:21.065956] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:29.312 [2024-10-08 17:43:21.066002] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:29.312 [2024-10-08 17:43:21.066014] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:29.312 [2024-10-08 17:43:21.066023] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:29.312 [2024-10-08 17:43:21.066031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.312 [2024-10-08 17:43:21.066058] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:29.312 [2024-10-08 17:43:21.066065] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:29.312 [2024-10-08 17:43:21.066072] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:29.312 [2024-10-08 17:43:21.066081] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:29.312 [2024-10-08 17:43:21.066087] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:29.312 [2024-10-08 17:43:21.066094] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:29.312 [2024-10-08 17:43:21.066128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.312 [2024-10-08 17:43:21.066135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.312 [2024-10-08 17:43:21.066490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.312 [2024-10-08 17:43:21.066503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b1fc0 with addr=10.0.0.2, port=4420 00:27:29.312 [2024-10-08 17:43:21.066511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b1fc0 is same with the state(6) to be set 00:27:29.312 [2024-10-08 17:43:21.066681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.312 [2024-10-08 17:43:21.066691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b1ca0 with addr=10.0.0.2, port=4420 00:27:29.312 [2024-10-08 17:43:21.066698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b1ca0 is same with the state(6) to be set 00:27:29.312 [2024-10-08 17:43:21.067035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.312 [2024-10-08 17:43:21.067046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23cffd0 with addr=10.0.0.2, port=4420 00:27:29.312 [2024-10-08 17:43:21.067053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cffd0 is same with the state(6) to be set 00:27:29.312 [2024-10-08 17:43:21.067082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b1fc0 (9): Bad file descriptor 00:27:29.312 [2024-10-08 17:43:21.067092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b1ca0 (9): Bad file descriptor 00:27:29.312 [2024-10-08 17:43:21.067101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cffd0 (9): Bad file descriptor 00:27:29.312 [2024-10-08 17:43:21.067127] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:29.312 [2024-10-08 17:43:21.067134] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:29.312 [2024-10-08 17:43:21.067140] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:29.312 [2024-10-08 17:43:21.067150] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:29.312 [2024-10-08 17:43:21.067156] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:29.312 [2024-10-08 17:43:21.067163] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:29.312 [2024-10-08 17:43:21.067172] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:29.312 [2024-10-08 17:43:21.067178] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:29.312 [2024-10-08 17:43:21.067185] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:29.312 [2024-10-08 17:43:21.067212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.312 [2024-10-08 17:43:21.067219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
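The repeated "connect() failed, errno = 111" (ECONNREFUSED) records are the expected signature at this point: bdev_nvme keeps trying to reconnect each cnode to 10.0.0.2:4420 while the target that owned the listener is shutting down, so every attempt is refused and each reset ends in "Resetting controller failed." A hypothetical probe (not part of the test script) that reproduces the same refusal from the shell:

    # bash's /dev/tcp redirection issues a connect(); with no listener on
    # 10.0.0.2:4420 it fails with ECONNREFUSED (errno 111), as logged above.
    timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null \
        || echo '10.0.0.2:4420 refused the connection'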
00:27:29.312 [2024-10-08 17:43:21.067225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.312 17:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:27:30.694 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 443755 00:27:30.694 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:27:30.694 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 443755 00:27:30.694 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:27:30.694 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:30.694 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:27:30.694 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:30.694 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 443755 00:27:30.694 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:27:30.694 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:30.694 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:30.695 17:43:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:30.695 rmmod nvme_tcp 00:27:30.695 rmmod nvme_fabrics 00:27:30.695 rmmod nvme_keyring 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 443465 ']' 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 443465 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 443465 ']' 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 443465 00:27:30.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (443465) - No such process 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 443465 is not found' 00:27:30.695 Process with pid 443465 is not found 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:30.695 17:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.605 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:32.605 00:27:32.605 real 0m7.999s 00:27:32.605 user 0m19.907s 00:27:32.605 sys 0m1.278s 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:32.606 
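Condensing the nvmftestfini trace above, the tc3 teardown reduces to roughly the following sequence (commands are lifted from the trace; the body shown for _remove_spdk_ns is an assumption, since the trace only logs the function call):

    modprobe -v -r nvme-tcp                               # rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    kill -0 443465 2>/dev/null \
        || echo 'Process with pid 443465 is not found'    # target already exited
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop the test's ACCEPT rule
    ip netns del cvl_0_0_ns_spdk 2>/dev/null              # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                              # clear the initiator-side address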
************************************ 00:27:32.606 END TEST nvmf_shutdown_tc3 00:27:32.606 ************************************ 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:32.606 ************************************ 00:27:32.606 START TEST nvmf_shutdown_tc4 00:27:32.606 ************************************ 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:32.606 17:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:32.606 17:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:32.606 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:32.606 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:32.606 17:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:32.606 Found net devices under 0000:31:00.0: cvl_0_0 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:32.606 Found net devices under 0000:31:00.1: cvl_0_1 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:32.606 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:32.607 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:32.607 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:32.607 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:32.607 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:32.607 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:32.607 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:32.607 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:32.607 17:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:32.607 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:32.607 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:32.607 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:32.607 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:32.607 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:32.607 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:32.867 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:32.867 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:32.867 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:32.867 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:32.867 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:32.867 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:32.867 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:32.867 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:32.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:32.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:27:32.867 00:27:32.867 --- 10.0.0.2 ping statistics --- 00:27:32.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.867 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:33.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:33.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:27:33.127 00:27:33.127 --- 10.0.0.1 ping statistics --- 00:27:33.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.127 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=445215 00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 445215 00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 445215 ']' 00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
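Before the target app comes up, nvmftestinit has rebuilt the split-namespace topology every tc in this file relies on; condensed from the trace above, with the two ping round-trips confirming it:

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator-side address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target: 0.625 ms
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator: 0.313 ms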
00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:33.127 17:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:33.127 [2024-10-08 17:43:24.987917] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:27:33.127 [2024-10-08 17:43:24.987969] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.127 [2024-10-08 17:43:25.073288] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:33.387 [2024-10-08 17:43:25.142591] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.387 [2024-10-08 17:43:25.142633] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:33.387 [2024-10-08 17:43:25.142638] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:33.387 [2024-10-08 17:43:25.142643] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:33.387 [2024-10-08 17:43:25.142647] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:33.387 [2024-10-08 17:43:25.144028] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:33.387 [2024-10-08 17:43:25.144336] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:33.387 [2024-10-08 17:43:25.144477] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.387 [2024-10-08 17:43:25.144478] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:33.958 [2024-10-08 17:43:25.838286] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:33.958 17:43:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.958 17:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:33.958 Malloc1 
00:27:33.958 [2024-10-08 17:43:25.936971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.218 Malloc2 00:27:34.218 Malloc3 00:27:34.218 Malloc4 00:27:34.218 Malloc5 00:27:34.218 Malloc6 00:27:34.218 Malloc7 00:27:34.218 Malloc8 00:27:34.478 Malloc9 00:27:34.478 Malloc10 00:27:34.478 17:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.478 17:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:34.478 17:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:34.478 17:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:34.478 17:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=445596 00:27:34.478 17:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:27:34.478 17:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:27:34.478 [2024-10-08 17:43:26.408689] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:39.765 17:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:39.765 17:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 445215 00:27:39.765 17:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 445215 ']' 00:27:39.765 17:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 445215 00:27:39.765 17:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:27:39.765 17:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:39.765 17:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 445215 00:27:39.765 17:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:39.765 17:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:39.765 17:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 445215' 00:27:39.765 killing process with pid 445215 00:27:39.765 17:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 445215 00:27:39.765 17:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 445215 00:27:39.765 [2024-10-08 17:43:31.413511] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203d350 is same with the state(6) to be set 00:27:39.765 [2024-10-08 17:43:31.413554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203d350 is same with the state(6) to be set 00:27:39.765 [2024-10-08 17:43:31.413575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203d350 is same with the state(6) to be set 00:27:39.765 [2024-10-08 17:43:31.413580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203d350 is same with the state(6) to be set 00:27:39.765 [2024-10-08 17:43:31.413585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203d350 is same with the state(6) to be set 00:27:39.765 [2024-10-08 17:43:31.413590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203d350 is same with the state(6) to be set 00:27:39.765 [2024-10-08 17:43:31.413594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203d350 is same with the state(6) to be set 00:27:39.765 [2024-10-08 17:43:31.413599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203d350 is same with the state(6) to be set 00:27:39.765 [2024-10-08 17:43:31.413604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203d350 is same with the state(6) to be set 00:27:39.765 [2024-10-08 17:43:31.413609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203d350 is same with the state(6) to be set 00:27:39.765 [2024-10-08 17:43:31.413614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203d350 is same with the state(6) to be set 00:27:39.765 [2024-10-08 17:43:31.413618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203d350 is same with the state(6) to be set 00:27:39.765 [2024-10-08 17:43:31.413623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203d350 is same with the state(6) to be set 00:27:39.765 [2024-10-08 17:43:31.413628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203d350 is same with the state(6) to be set 00:27:39.765 [2024-10-08 17:43:31.413667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa2b10 is same with the state(6) to be set 00:27:39.765 Write completed with error (sct=0, sc=8) 00:27:39.765 Write completed with error (sct=0, sc=8) 00:27:39.765 starting I/O failed: -6 00:27:39.765 Write completed with error (sct=0, sc=8) 00:27:39.765 [2024-10-08 17:43:31.414008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa3000 is same with the state(6) to be set 00:27:39.765 [2024-10-08 17:43:31.414031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa3000 is same with the state(6) to be set 00:27:39.765 Write completed with error (sct=0, sc=8) 00:27:39.765 Write completed with error (sct=0, sc=8) 00:27:39.765 Write completed with error (sct=0, sc=8) 00:27:39.765 starting I/O failed: -6 00:27:39.765 Write completed with error (sct=0, sc=8) 00:27:39.765 Write completed with error (sct=0, sc=8) 00:27:39.765 Write completed with error (sct=0, sc=8) 00:27:39.765 Write completed with error (sct=0, sc=8) 00:27:39.765 starting I/O failed: -6 00:27:39.765 Write completed with error (sct=0, sc=8) 00:27:39.765 Write completed with error (sct=0, sc=8) 00:27:39.765 Write completed with error (sct=0, 
sc=8) 00:27:39.765 Write completed with error (sct=0, sc=8) 00:27:39.765 starting I/O failed: -6 00:27:39.765 Write completed with error (sct=0, sc=8) 00:27:39.765 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 [2024-10-08 17:43:31.414336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203cfd0 is same with the state(6) to be set 00:27:39.766 [2024-10-08 17:43:31.414358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203cfd0 is same with the state(6) to be set 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 [2024-10-08 17:43:31.414364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203cfd0 is same with the state(6) to be set 00:27:39.766 starting I/O failed: -6 00:27:39.766 [2024-10-08 17:43:31.414369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203cfd0 is same with the state(6) to be set 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 [2024-10-08 17:43:31.414466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.766 starting I/O failed: -6 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 
00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 
00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 starting I/O failed: -6 00:27:39.766 [2024-10-08 17:43:31.416264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:39.766 [2024-10-08 17:43:31.416560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1d90 is same with the state(6) to be set 00:27:39.766 [2024-10-08 17:43:31.416577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1d90 is same with the state(6) to be set 00:27:39.766 [2024-10-08 17:43:31.416582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1d90 is same with the state(6) to be set 00:27:39.766 [2024-10-08 17:43:31.416588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1d90 is same with the state(6) to be set 00:27:39.766 [2024-10-08 17:43:31.416593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1d90 is same with the state(6) to be set 00:27:39.766 [2024-10-08 17:43:31.416598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1d90 is same with the state(6) to be set 00:27:39.766 [2024-10-08 17:43:31.416603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1d90 is same with the state(6) to be set 00:27:39.766 [2024-10-08 17:43:31.416607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1d90 is same with the state(6) to be set 00:27:39.766 [2024-10-08 17:43:31.416612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1d90 is same with the state(6) to be set 00:27:39.766 [2024-10-08 17:43:31.416617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1d90 is same with the state(6) to be set 00:27:39.766 [2024-10-08 17:43:31.416622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1d90 is same with the state(6) to be set 00:27:39.766 [2024-10-08 17:43:31.416626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1d90 is same with the state(6) to be set 00:27:39.766 [2024-10-08 17:43:31.416631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1d90 is same with the state(6) to be set 00:27:39.766 [2024-10-08 17:43:31.416636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1d90 is same with the state(6) to be set 00:27:39.766 [2024-10-08 17:43:31.416641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1d90 is same with the state(6) to be set 00:27:39.766 [2024-10-08 17:43:31.416646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1fa1d90 is same with the state(6) to be set 00:27:39.766 [2024-10-08 17:43:31.416651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1d90 is same with the state(6) to be set 00:27:39.766 [2024-10-08 17:43:31.416656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1d90 is same with the state(6) to be set 00:27:39.766 [2024-10-08 17:43:31.416660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1d90 is same with the state(6) to be set 00:27:39.766 [2024-10-08 17:43:31.416665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1d90 is same with the state(6) to be set 00:27:39.766 starting I/O failed: -6 00:27:39.766 starting I/O failed: -6 00:27:39.766 starting I/O failed: -6 00:27:39.766 starting I/O failed: -6 00:27:39.766 starting I/O failed: -6 00:27:39.766 starting I/O failed: -6 00:27:39.766 starting I/O failed: -6 00:27:39.766 starting I/O failed: -6 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 [2024-10-08 17:43:31.416898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa2280 is same with the state(6) to be set 00:27:39.766 starting I/O failed: -6 00:27:39.766 [2024-10-08 17:43:31.416919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa2280 is same with the state(6) to be set 00:27:39.766 Write completed with error (sct=0, sc=8) 00:27:39.766 [2024-10-08 17:43:31.416925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa2280 is same with the state(6) to be set 00:27:39.766 starting I/O failed: -6 00:27:39.767 [2024-10-08 17:43:31.416930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa2280 is same with the state(6) to be set 00:27:39.767 [2024-10-08 17:43:31.416935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa2280 is same with the state(6) to be set 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 
00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 [2024-10-08 17:43:31.417241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa2770 is same with Write completed with error (sct=0, sc=8) 00:27:39.767 the state(6) to be set 00:27:39.767 starting I/O failed: -6 00:27:39.767 [2024-10-08 17:43:31.417258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa2770 is same with the state(6) to be set 00:27:39.767 [2024-10-08 17:43:31.417264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa2770 is same with the state(6) to be set 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 [2024-10-08 17:43:31.417269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa2770 is same with the state(6) to be set 00:27:39.767 starting I/O failed: -6 00:27:39.767 [2024-10-08 17:43:31.417274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa2770 is same with the state(6) to be set 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 [2024-10-08 17:43:31.417484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa18a0 is same with the state(6) to be set 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 [2024-10-08 17:43:31.417504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa18a0 is same with starting I/O failed: -6 00:27:39.767 the state(6) to be set 00:27:39.767 [2024-10-08 17:43:31.417510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa18a0 is same with the state(6) to be set 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 [2024-10-08 17:43:31.417515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa18a0 is same with the 
state(6) to be set 00:27:39.767 [2024-10-08 17:43:31.417520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa18a0 is same with starting I/O failed: -6 00:27:39.767 the state(6) to be set 00:27:39.767 [2024-10-08 17:43:31.417526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa18a0 is same with the state(6) to be set 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 [2024-10-08 17:43:31.418070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.767 NVMe io qpair process completion error 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O 
failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 [2024-10-08 17:43:31.419216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 Write completed with error (sct=0, sc=8) 00:27:39.767 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error 
(sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 [2024-10-08 17:43:31.420100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O 
failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 [2024-10-08 17:43:31.420993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 
00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.768 Write completed with error (sct=0, sc=8) 00:27:39.768 starting I/O failed: -6 00:27:39.769 [2024-10-08 17:43:31.422588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.769 NVMe io qpair process completion error 00:27:39.769 Write 
completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 [2024-10-08 17:43:31.423848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 
00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 [2024-10-08 17:43:31.424668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error 
(sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 [2024-10-08 17:43:31.425584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O 
failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.769 starting I/O failed: -6 00:27:39.769 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O 
failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 [2024-10-08 17:43:31.427208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:39.770 NVMe io qpair process completion error 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 starting I/O failed: -6 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 Write completed with error (sct=0, sc=8) 00:27:39.770 
00:27:39.770 [2024-10-08 17:43:31.428585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error / I/O-failure lines condensed ...]
00:27:39.771 [2024-10-08 17:43:31.429399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error / I/O-failure lines condensed ...]
00:27:39.771 [2024-10-08 17:43:31.430331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error / I/O-failure lines condensed ...]
00:27:39.771 [2024-10-08 17:43:31.432602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:39.771 NVMe io qpair process completion error
[... repeated write-error / I/O-failure lines condensed ...]
00:27:39.772 [2024-10-08 17:43:31.433727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error / I/O-failure lines condensed ...]
00:27:39.772 [2024-10-08 17:43:31.434656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error / I/O-failure lines condensed ...]
00:27:39.772 [2024-10-08 17:43:31.435560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error / I/O-failure lines condensed ...]
00:27:39.773 [2024-10-08 17:43:31.437001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:39.773 NVMe io qpair process completion error
[... repeated write-error / I/O-failure lines condensed ...]
00:27:39.773 [2024-10-08 17:43:31.437971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error / I/O-failure lines condensed ...]
00:27:39.773 [2024-10-08 17:43:31.438777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error / I/O-failure lines condensed ...]
00:27:39.774 [2024-10-08 17:43:31.439716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error / I/O-failure lines condensed ...]
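(Editor's note, not test output: the "CQ transport error -6" lines originate at nvme_qpair.c:804 inside spdk_nvme_qpair_process_completions(), which returns a negative errno once the TCP qpair's completion path has failed. A hypothetical polling helper, sketched below with the assumed name poll_qpair, would see the failure as a negative return; the interleaved aborted-write completions above suggest outstanding I/O is then failed back to its callbacks.)

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Illustrative sketch only: drain completions from one I/O qpair. */
    static void
    poll_qpair(struct spdk_nvme_qpair *qpair)
    {
            /* Returns the number of completions reaped, or a negative errno
             * once the qpair has failed; 0 means no completion limit. */
            int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

            if (rc < 0) {
                    /* rc == -6 is -ENXIO ("No such device or address"),
                     * the same code the log lines above report. */
                    fprintf(stderr, "qpair failed: %d\n", rc);
            }
    }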
00:27:39.774 [2024-10-08 17:43:31.442057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:39.774 NVMe io qpair process completion error
[... repeated write-error / I/O-failure lines condensed ...]
00:27:39.775 [2024-10-08 17:43:31.443235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error / I/O-failure lines condensed ...]
00:27:39.775 [2024-10-08 17:43:31.444124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error / I/O-failure lines condensed ...]
00:27:39.775 [2024-10-08 17:43:31.445031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error / I/O-failure lines condensed ...]
00:27:39.776 [2024-10-08 17:43:31.447465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:39.776 NVMe io qpair process completion error
[... repeated write-error / I/O-failure lines condensed ...]
00:27:39.776 [2024-10-08 17:43:31.448680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error / I/O-failure lines condensed ...]
00:27:39.776 [2024-10-08 17:43:31.449728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error / I/O-failure lines condensed ...]
00:27:39.776 [2024-10-08 17:43:31.450666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error / I/O-failure lines condensed ...]
00:27:39.777 Write completed with error (sct=0,
sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 
00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 [2024-10-08 17:43:31.452518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:39.777 NVMe io qpair process completion error 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 [2024-10-08 17:43:31.453365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, 
sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 [2024-10-08 17:43:31.454170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.777 Write completed with error (sct=0, sc=8) 00:27:39.777 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 
00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 [2024-10-08 17:43:31.455115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:39.778 starting I/O failed: -6 00:27:39.778 starting I/O failed: -6 00:27:39.778 starting I/O failed: -6 00:27:39.778 starting I/O failed: -6 00:27:39.778 starting I/O failed: -6 00:27:39.778 starting I/O failed: -6 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 
00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 
00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 [2024-10-08 17:43:31.457729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.778 NVMe io qpair process completion error 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 starting I/O failed: -6 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.778 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 
00:27:39.779 [2024-10-08 17:43:31.458825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 [2024-10-08 17:43:31.459656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error 
(sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, 
sc=8) 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 [2024-10-08 17:43:31.460578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.779 Write completed with error (sct=0, sc=8) 00:27:39.779 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 
00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 Write completed with error (sct=0, sc=8) 00:27:39.780 starting I/O failed: -6 00:27:39.780 [2024-10-08 17:43:31.462239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.780 NVMe io qpair process completion error 00:27:39.780 Initializing NVMe Controllers 00:27:39.780 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:27:39.780 Controller IO queue size 128, less than required. 00:27:39.780 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:39.780 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:27:39.780 Controller IO queue size 128, less than required. 00:27:39.780 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:39.780 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:27:39.780 Controller IO queue size 128, less than required. 00:27:39.780 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:27:39.780 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:27:39.780 Controller IO queue size 128, less than required. 00:27:39.780 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:39.780 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:27:39.780 Controller IO queue size 128, less than required. 00:27:39.780 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:39.780 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:27:39.780 Controller IO queue size 128, less than required. 00:27:39.780 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:39.780 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:27:39.780 Controller IO queue size 128, less than required. 00:27:39.780 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:39.780 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:39.780 Controller IO queue size 128, less than required. 00:27:39.780 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:39.780 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:27:39.780 Controller IO queue size 128, less than required. 00:27:39.780 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:39.780 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:27:39.780 Controller IO queue size 128, less than required. 00:27:39.780 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:39.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:27:39.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:27:39.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:27:39.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:27:39.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:27:39.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:27:39.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:27:39.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:39.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:27:39.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:27:39.780 Initialization complete. Launching workers. 
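[annotation] The "Controller IO queue size 128, less than required" advisories above mean the perf tool submits more outstanding I/Os per controller than the 128-entry IO queue can hold, so the excess waits inside the NVMe driver. A rough, hypothetical illustration of the suggested remedy — flag spellings as documented for SPDK's perf tool; the exact invocation used by this test is not shown in the log:

    # Cap the submission queue depth (-q) at or below the controller's
    # 128-entry IO queue, with a 4 KiB I/O size (-o), so requests are
    # not parked in the driver. Target NQN/address taken from the log.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode8' \
        -q 64 -o 4096 -w write -t 10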
00:27:39.780 ========================================================
00:27:39.780 Latency(us)
00:27:39.780 Device Information                                                        :     IOPS     MiB/s    Average        min        max
00:27:39.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0  :  1888.44     81.14   67795.60     838.77  125412.30
00:27:39.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0  :  1893.42     81.36   67647.85     795.35  133808.24
00:27:39.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0 :  1877.03     80.65   67573.08     631.46  117145.63
00:27:39.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0  :  1835.75     78.88   69111.72     817.68  126452.71
00:27:39.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0  :  1865.00     80.14   68049.61     698.16  124270.42
00:27:39.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0  :  1914.37     82.26   66318.18     661.05  116845.82
00:27:39.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0  :  1900.47     81.66   66831.57     749.59  123277.86
00:27:39.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0  :  1899.44     81.62   66884.15     861.01  124195.20
00:27:39.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0  :  1904.21     81.82   66746.55     906.16  117489.18
00:27:39.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0  :  1906.07     81.90   66716.83     538.79  126009.34
00:27:39.780 ========================================================
00:27:39.780 Total                                                                     : 18884.21    811.43   67358.56     538.79  133808.24
00:27:39.780
00:27:39.780 [2024-10-08 17:43:31.464950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceb810 is same with the state(6) to be set
00:27:39.780 [2024-10-08 17:43:31.465007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcea8e0 is same with the state(6) to be set
00:27:39.780 [2024-10-08 17:43:31.465039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec760 is same with the state(6) to be set
00:27:39.780 [2024-10-08 17:43:31.465066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceafd0 is same with the state(6) to be set
00:27:39.780 [2024-10-08 17:43:31.465094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceb1b0 is same with the state(6) to be set
00:27:39.780 [2024-10-08 17:43:31.465123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcea5b0 is same with the state(6) to be set
00:27:39.780 [2024-10-08 17:43:31.465151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcebb40 is same with the state(6) to be set
00:27:39.780 [2024-10-08 17:43:31.465179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec430 is same with the state(6) to be set
00:27:39.780 [2024-10-08 17:43:31.465209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcea280 is same with the state(6) to be set
00:27:39.780 [2024-10-08 17:43:31.465237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceb4e0 is same with the state(6) to be set
00:27:39.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:27:39.780 17:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
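[annotation] A rough sanity check on the latency table above (not part of the log): by Little's law, average latency ≈ queue depth / IOPS. With the 128-deep queues noted earlier, 128 / 1888.44 IOPS ≈ 0.0678 s ≈ 67,781 µs, which lines up with the 67795.60 µs average reported for cnode8 — the high averages reflect the deep queue rather than unusually slow individual completions.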
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 445596
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 445596
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 445596
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:40.721 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:40.721 rmmod nvme_tcp
00:27:40.981 rmmod nvme_fabrics
00:27:40.981 rmmod nvme_keyring
00:27:40.981 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:40.981 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:27:40.981 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:27:40.981 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 445215 ']'
00:27:40.981 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 445215
00:27:40.981 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 445215 ']'
00:27:40.981 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 445215
00:27:40.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (445215) - No such process
00:27:40.981 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 445215 is not found'
00:27:40.981 Process with pid 445215 is not found
00:27:40.981 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:27:40.981 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:27:40.981 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:27:40.981 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:27:40.981 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save
00:27:40.981 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:27:40.981 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore
00:27:40.981 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:40.981 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:40.981 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:40.981 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:40.981 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:42.893 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:42.893
00:27:42.893 real 0m10.317s
00:27:42.893 user 0m27.856s
00:27:42.893 sys 0m3.998s
00:27:42.893 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:27:42.893 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:27:42.893 ************************************
00:27:42.893 END TEST nvmf_shutdown_tc4 ************************************
00:27:43.154 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:27:43.154
00:27:43.154 real 0m43.827s
00:27:43.154 user 1m46.194s
00:27:43.154 sys 0m13.929s
00:27:43.154 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:27:43.154 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
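[annotation] The NOT/valid_exec_arg trace above (common/autotest_common.sh@650-@677) checks that `wait 445596` fails because the perf process has already exited. A simplified, hypothetical bash sketch of that inversion helper — not the verbatim autotest_common.sh source:

    NOT() {
        # Run the wrapped command; succeed only when it fails.
        # (The real helper also screens out deaths by signal
        # with an `es > 128` check, as seen in the trace.)
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    # As in shutdown.sh@158: pid 445596 is already gone, so `wait`
    # fails and NOT turns that expected failure into a pass.
    NOT wait 445596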
00:27:43.154 ************************************
00:27:43.154 END TEST nvmf_shutdown ************************************
00:27:43.154 17:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:27:43.154
00:27:43.154 real 12m59.222s
00:27:43.154 user 27m29.189s
00:27:43.154 sys 3m47.004s
00:27:43.154 17:43:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
00:27:43.154 17:43:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:27:43.155 ************************************
00:27:43.154 END TEST nvmf_target_extra ************************************
00:27:43.155 17:43:34 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:27:43.154 17:43:34 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:27:43.154 17:43:34 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:27:43.154 17:43:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:43.155 ************************************
00:27:43.155 START TEST nvmf_host ************************************
00:27:43.155 17:43:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:27:43.155 * Looking for test storage...
00:27:43.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:27:43.155 17:43:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:27:43.155 17:43:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version
00:27:43.155 17:43:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:27:43.416 17:43:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:27:43.416 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-:
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-:
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<'
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:27:43.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:43.417 --rc genhtml_branch_coverage=1
00:27:43.417 --rc genhtml_function_coverage=1
00:27:43.417 --rc genhtml_legend=1
00:27:43.417 --rc geninfo_all_blocks=1
00:27:43.417 --rc geninfo_unexecuted_blocks=1
00:27:43.417
00:27:43.417 '
00:27:43.417 [the identical multi-line --rc option block is echoed three more times, for the LCOV_OPTS assignment (common/autotest_common.sh@1694) and for the export and assignment of LCOV='lcov ...' (common/autotest_common.sh@1695); repeats condensed]
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:43.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.417 ************************************ 00:27:43.417 START TEST nvmf_multicontroller 00:27:43.417 ************************************ 00:27:43.417 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:43.417 * Looking for test storage... 
00:27:43.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:43.678 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:43.678 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:27:43.678 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:43.678 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:43.678 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:43.678 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:43.678 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:43.678 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:27:43.678 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:27:43.678 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:27:43.678 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:43.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.679 --rc genhtml_branch_coverage=1 00:27:43.679 --rc genhtml_function_coverage=1 00:27:43.679 --rc genhtml_legend=1 00:27:43.679 --rc geninfo_all_blocks=1 00:27:43.679 --rc geninfo_unexecuted_blocks=1 00:27:43.679 00:27:43.679 ' 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:43.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.679 --rc genhtml_branch_coverage=1 00:27:43.679 --rc genhtml_function_coverage=1 00:27:43.679 --rc genhtml_legend=1 00:27:43.679 --rc geninfo_all_blocks=1 00:27:43.679 --rc geninfo_unexecuted_blocks=1 00:27:43.679 00:27:43.679 ' 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:43.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.679 --rc genhtml_branch_coverage=1 00:27:43.679 --rc genhtml_function_coverage=1 00:27:43.679 --rc genhtml_legend=1 00:27:43.679 --rc geninfo_all_blocks=1 00:27:43.679 --rc geninfo_unexecuted_blocks=1 00:27:43.679 00:27:43.679 ' 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:43.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.679 --rc genhtml_branch_coverage=1 00:27:43.679 --rc genhtml_function_coverage=1 00:27:43.679 --rc genhtml_legend=1 00:27:43.679 --rc geninfo_all_blocks=1 00:27:43.679 --rc geninfo_unexecuted_blocks=1 00:27:43.679 00:27:43.679 ' 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:43.679 17:43:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:43.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:43.679 17:43:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:27:43.679 17:43:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:27:51.811 
17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:51.811 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:51.811 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:51.811 17:43:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:51.811 Found net devices under 0000:31:00.0: cvl_0_0 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:51.811 Found net devices under 0000:31:00.1: cvl_0_1 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
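What follows is nvmf_tcp_init building the test topology: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the target, the second port (cvl_0_1) stays in the root namespace as the initiator, each side gets a 10.0.0.x/24 address, TCP port 4420 is opened, and reachability is verified both ways with ping. A minimal standalone sketch of the same sequence — interface and namespace names here are placeholders standing in for cvl_0_0/cvl_0_1/cvl_0_0_ns_spdk, not the harness's own variables:

#!/usr/bin/env bash
# Sketch only: TGT_IF/INI_IF/NS are hypothetical names; run as root.
set -e
TGT_IF=eth_tgt; INI_IF=eth_ini; NS=tgt_ns_spdk
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                          # target port into the netns
ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator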
00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:51.811 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:51.812 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:51.812 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.812 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:51.812 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:51.812 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:51.812 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:51.812 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:51.812 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:51.812 17:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:51.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:51.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:27:51.812 00:27:51.812 --- 10.0.0.2 ping statistics --- 00:27:51.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.812 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:51.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:51.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:27:51.812 00:27:51.812 --- 10.0.0.1 ping statistics --- 00:27:51.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.812 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=451075 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 451075 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 451075 ']' 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:51.812 17:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:51.812 [2024-10-08 17:43:43.253830] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:27:51.812 [2024-10-08 17:43:43.253896] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.812 [2024-10-08 17:43:43.346622] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:51.812 [2024-10-08 17:43:43.440590] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.812 [2024-10-08 17:43:43.440653] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:51.812 [2024-10-08 17:43:43.440667] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.812 [2024-10-08 17:43:43.440674] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.812 [2024-10-08 17:43:43.440681] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:51.812 [2024-10-08 17:43:43.442289] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:51.812 [2024-10-08 17:43:43.442447] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.812 [2024-10-08 17:43:43.442448] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:52.383 [2024-10-08 17:43:44.136766] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:52.383 Malloc0 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:52.383 [2024-10-08 17:43:44.211557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:52.383 [2024-10-08 17:43:44.223456] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:52.383 Malloc1 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=451423 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 451423 /var/tmp/bdevperf.sock 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 451423 ']' 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:52.383 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:52.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
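Below, the harness drives bdevperf over /var/tmp/bdevperf.sock: the first bdev_nvme_attach_controller creates controller NVMe0 (bdev NVMe0n1), and every later attach that reuses that controller name with a conflicting hostnqn or multipath mode is expected to be rejected with JSON-RPC error -114. The rpc_cmd seen in the traces wraps SPDK's scripts/rpc.py, so the same exchange can be reproduced by hand; a hedged sketch, assuming the SPDK repo root as working directory:

RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"
# First attach succeeds and exposes the namespace as bdev NVMe0n1:
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
# Re-attaching the same controller name with a different hostnqn must fail
# with -114 ("A controller named NVMe0 already exists..."):
if $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 \
    -q nqn.2021-09-7.io.spdk:00001; then
    echo "duplicate attach unexpectedly succeeded" >&2
    exit 1
fi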
00:27:52.384 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:52.384 17:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.325 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:53.325 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:27:53.325 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:27:53.325 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.325 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.596 NVMe0n1 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.596 1 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.596 request: 00:27:53.596 { 00:27:53.596 "name": "NVMe0", 00:27:53.596 "trtype": "tcp", 00:27:53.596 "traddr": "10.0.0.2", 00:27:53.596 "adrfam": "ipv4", 00:27:53.596 "trsvcid": "4420", 00:27:53.596 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:27:53.596 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:53.596 "hostaddr": "10.0.0.1", 00:27:53.596 "prchk_reftag": false, 00:27:53.596 "prchk_guard": false, 00:27:53.596 "hdgst": false, 00:27:53.596 "ddgst": false, 00:27:53.596 "allow_unrecognized_csi": false, 00:27:53.596 "method": "bdev_nvme_attach_controller", 00:27:53.596 "req_id": 1 00:27:53.596 } 00:27:53.596 Got JSON-RPC error response 00:27:53.596 response: 00:27:53.596 { 00:27:53.596 "code": -114, 00:27:53.596 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:53.596 } 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.596 request: 00:27:53.596 { 00:27:53.596 "name": "NVMe0", 00:27:53.596 "trtype": "tcp", 00:27:53.596 "traddr": "10.0.0.2", 00:27:53.596 "adrfam": "ipv4", 00:27:53.596 "trsvcid": "4420", 00:27:53.596 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:53.596 "hostaddr": "10.0.0.1", 00:27:53.596 "prchk_reftag": false, 00:27:53.596 "prchk_guard": false, 00:27:53.596 "hdgst": false, 00:27:53.596 "ddgst": false, 00:27:53.596 "allow_unrecognized_csi": false, 00:27:53.596 "method": "bdev_nvme_attach_controller", 00:27:53.596 "req_id": 1 00:27:53.596 } 00:27:53.596 Got JSON-RPC error response 00:27:53.596 response: 00:27:53.596 { 00:27:53.596 "code": -114, 00:27:53.596 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:53.596 } 00:27:53.596 17:43:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.596 request: 00:27:53.596 { 00:27:53.596 "name": "NVMe0", 00:27:53.596 "trtype": "tcp", 00:27:53.596 "traddr": "10.0.0.2", 00:27:53.596 "adrfam": "ipv4", 00:27:53.596 "trsvcid": "4420", 00:27:53.596 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:53.596 "hostaddr": "10.0.0.1", 00:27:53.596 "prchk_reftag": false, 00:27:53.596 "prchk_guard": false, 00:27:53.596 "hdgst": false, 00:27:53.596 "ddgst": false, 00:27:53.596 "multipath": "disable", 00:27:53.596 "allow_unrecognized_csi": false, 00:27:53.596 "method": "bdev_nvme_attach_controller", 00:27:53.596 "req_id": 1 00:27:53.596 } 00:27:53.596 Got JSON-RPC error response 00:27:53.596 response: 00:27:53.596 { 00:27:53.596 "code": -114, 00:27:53.596 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:27:53.596 } 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:53.596 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:53.597 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:53.597 17:43:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:53.597 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:53.597 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:53.597 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:53.597 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:53.597 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:53.597 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:53.597 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:53.597 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.597 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.597 request: 00:27:53.597 { 00:27:53.597 "name": "NVMe0", 00:27:53.597 "trtype": "tcp", 00:27:53.597 "traddr": "10.0.0.2", 00:27:53.597 "adrfam": "ipv4", 00:27:53.597 "trsvcid": "4420", 00:27:53.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:53.597 "hostaddr": "10.0.0.1", 00:27:53.597 "prchk_reftag": false, 00:27:53.597 "prchk_guard": false, 00:27:53.597 "hdgst": false, 00:27:53.597 "ddgst": false, 00:27:53.597 "multipath": "failover", 00:27:53.597 "allow_unrecognized_csi": false, 00:27:53.597 "method": "bdev_nvme_attach_controller", 00:27:53.597 "req_id": 1 00:27:53.597 } 00:27:53.597 Got JSON-RPC error response 00:27:53.597 response: 00:27:53.597 { 00:27:53.597 "code": -114, 00:27:53.597 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:53.597 } 00:27:53.597 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:53.597 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:53.597 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:53.597 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:53.597 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:53.597 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:53.597 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.597 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.856 NVMe0n1 00:27:53.856 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
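The four -114 rejections above map out bdev_nvme_attach_controller's reuse rules for an existing controller name: repeating the name NVMe0 with a different host NQN, with a different subsystem NQN, with multipath explicitly disabled (-x disable), or with -x failover but the same portal is refused in every case; only the final call, which adds a genuinely new path (port 4421) to the same subsystem, is accepted and leaves the single NVMe0n1 bdev intact. A minimal sketch of the two outcomes, assuming SPDK's scripts/rpc.py (which the rpc_cmd wrapper in the trace ultimately invokes):

  # second path to the same subsystem on a new portal -> accepted, NVMe0n1 unchanged
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # same name and portal but a different subsystem -> JSON-RPC error -114, as logged above
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1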
00:27:53.856 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:53.856 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.856 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.856 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.857 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:27:53.857 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.857 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.857 00:27:53.857 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.857 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:53.857 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:53.857 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.857 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.857 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.857 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:53.857 17:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:55.237 { 00:27:55.237 "results": [ 00:27:55.237 { 00:27:55.237 "job": "NVMe0n1", 00:27:55.237 "core_mask": "0x1", 00:27:55.237 "workload": "write", 00:27:55.238 "status": "finished", 00:27:55.238 "queue_depth": 128, 00:27:55.238 "io_size": 4096, 00:27:55.238 "runtime": 1.0066, 00:27:55.238 "iops": 29048.281343135306, 00:27:55.238 "mibps": 113.46984899662229, 00:27:55.238 "io_failed": 0, 00:27:55.238 "io_timeout": 0, 00:27:55.238 "avg_latency_us": 4395.664751481988, 00:27:55.238 "min_latency_us": 2075.306666666667, 00:27:55.238 "max_latency_us": 13544.106666666667 00:27:55.238 } 00:27:55.238 ], 00:27:55.238 "core_count": 1 00:27:55.238 } 00:27:55.238 17:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:55.238 17:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.238 17:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.238 17:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.238 17:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:27:55.238 17:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 451423 00:27:55.238 17:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 451423 ']' 00:27:55.238 17:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 451423 00:27:55.238 17:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:27:55.238 17:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:55.238 17:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 451423 00:27:55.238 17:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:55.238 17:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:55.238 17:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 451423' 00:27:55.238 killing process with pid 451423 00:27:55.238 17:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 451423 00:27:55.238 17:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 451423 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:27:55.238 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:55.238 [2024-10-08 17:43:44.353774] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:27:55.238 [2024-10-08 17:43:44.353850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid451423 ] 00:27:55.238 [2024-10-08 17:43:44.436346] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.238 [2024-10-08 17:43:44.532310] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.238 [2024-10-08 17:43:45.717740] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name aa19ea81-53e0-4829-af2f-e2c3f0ca736d already exists 00:27:55.238 [2024-10-08 17:43:45.717772] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:aa19ea81-53e0-4829-af2f-e2c3f0ca736d alias for bdev NVMe1n1 00:27:55.238 [2024-10-08 17:43:45.717780] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:55.238 Running I/O for 1 seconds... 00:27:55.238 29049.00 IOPS, 113.47 MiB/s 00:27:55.238 Latency(us) 00:27:55.238 [2024-10-08T15:43:47.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.238 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:55.238 NVMe0n1 : 1.01 29048.28 113.47 0.00 0.00 4395.66 2075.31 13544.11 00:27:55.238 [2024-10-08T15:43:47.230Z] =================================================================================================================== 00:27:55.238 [2024-10-08T15:43:47.230Z] Total : 29048.28 113.47 0.00 0.00 4395.66 2075.31 13544.11 00:27:55.238 Received shutdown signal, test time was about 1.000000 seconds 00:27:55.238 00:27:55.238 Latency(us) 00:27:55.238 [2024-10-08T15:43:47.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.238 [2024-10-08T15:43:47.230Z] =================================================================================================================== 00:27:55.238 [2024-10-08T15:43:47.230Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:55.238 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:55.238 rmmod nvme_tcp 00:27:55.238 rmmod nvme_fabrics 00:27:55.238 rmmod nvme_keyring 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:27:55.238 
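With the kernel initiator modules unloaded (the rmmod lines above), nvmftestfini finishes by killing the nvmf_tgt process, pid 451075, stripping the SPDK_NVMF-tagged firewall rules, and tearing down the test namespace. Condensed to its essential commands (a sketch; remove_spdk_ns is assumed here to reduce to a netns delete):

  kill 451075 && wait 451075                             # stop the target reactor process
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only this run's rules
  ip netns delete cvl_0_0_ns_spdk                        # remove the target-side namespace
  ip -4 addr flush cvl_0_1                               # clear the initiator address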
17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 451075 ']' 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 451075 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 451075 ']' 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 451075 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:55.238 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 451075 00:27:55.498 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:55.498 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:55.498 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 451075' 00:27:55.498 killing process with pid 451075 00:27:55.498 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 451075 00:27:55.498 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 451075 00:27:55.498 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:55.498 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:55.498 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:55.498 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:27:55.498 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:27:55.498 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:55.498 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:27:55.498 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:55.498 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:55.498 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.498 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:55.498 17:43:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.038 17:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:58.038 00:27:58.038 real 0m14.181s 00:27:58.038 user 0m17.181s 00:27:58.038 sys 0m6.643s 00:27:58.038 17:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:58.038 17:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.038 ************************************ 00:27:58.038 END TEST nvmf_multicontroller 00:27:58.038 ************************************ 00:27:58.038 17:43:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:27:58.038 17:43:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:58.038 17:43:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:58.038 17:43:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.038 ************************************ 00:27:58.038 START TEST nvmf_aer 00:27:58.038 ************************************ 00:27:58.038 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:58.038 * Looking for test storage... 00:27:58.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:58.038 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:58.038 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:58.038 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:27:58.038 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:58.038 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:58.038 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:58.038 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:58.038 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:27:58.038 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:27:58.038 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:58.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.039 --rc genhtml_branch_coverage=1 00:27:58.039 --rc genhtml_function_coverage=1 00:27:58.039 --rc genhtml_legend=1 00:27:58.039 --rc geninfo_all_blocks=1 00:27:58.039 --rc geninfo_unexecuted_blocks=1 00:27:58.039 00:27:58.039 ' 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:58.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.039 --rc genhtml_branch_coverage=1 00:27:58.039 --rc genhtml_function_coverage=1 00:27:58.039 --rc genhtml_legend=1 00:27:58.039 --rc geninfo_all_blocks=1 00:27:58.039 --rc geninfo_unexecuted_blocks=1 00:27:58.039 00:27:58.039 ' 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:58.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.039 --rc genhtml_branch_coverage=1 00:27:58.039 --rc genhtml_function_coverage=1 00:27:58.039 --rc genhtml_legend=1 00:27:58.039 --rc geninfo_all_blocks=1 00:27:58.039 --rc geninfo_unexecuted_blocks=1 00:27:58.039 00:27:58.039 ' 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:58.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.039 --rc genhtml_branch_coverage=1 00:27:58.039 --rc genhtml_function_coverage=1 00:27:58.039 --rc genhtml_legend=1 00:27:58.039 --rc geninfo_all_blocks=1 00:27:58.039 --rc geninfo_unexecuted_blocks=1 00:27:58.039 00:27:58.039 ' 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:58.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:27:58.039 17:43:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.174 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:06.174 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:28:06.174 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:06.174 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:06.174 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:06.174 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:06.174 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:06.174 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:28:06.174 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:06.174 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:28:06.174 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:28:06.174 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:28:06.174 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:28:06.174 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:28:06.174 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:28:06.174 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:06.174 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:06.174 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:06.174 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:06.174 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:06.174 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:06.174 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:06.175 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:06.175 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:06.175 Found net devices under 0000:31:00.0: cvl_0_0 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:06.175 17:43:57 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:06.175 Found net devices under 0000:31:00.1: cvl_0_1 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:06.175 
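Both E810 ports live in the same host, so nvmf_tcp_init isolates the target port in its own network namespace to force real TCP traffic between initiator and target: cvl_0_0 (target, 10.0.0.2) moves into cvl_0_0_ns_spdk, cvl_0_1 (initiator, 10.0.0.1) stays in the root namespace, and an iptables rule opens port 4420 on the initiator side. Reduced to its essentials (interface and namespace names exactly as in the trace):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two pings that follow confirm connectivity in both directions before the target is started.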
17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:06.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:06.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:28:06.175 00:28:06.175 --- 10.0.0.2 ping statistics --- 00:28:06.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.175 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:06.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:06.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:28:06.175 00:28:06.175 --- 10.0.0.1 ping statistics --- 00:28:06.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.175 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=456171 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 456171 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 456171 ']' 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:06.175 17:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.175 [2024-10-08 17:43:57.514309] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
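nvmfappstart has just launched the target inside that namespace, and the EAL parameter line that follows is the resulting DPDK command line (core mask 0xF, file prefix spdk0). The launch itself, as it appears in the trace, reduces to:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # test-suite helper: polls /var/tmp/spdk.sock until the RPC server answers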
00:28:06.175 [2024-10-08 17:43:57.514379] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:06.175 [2024-10-08 17:43:57.603742] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:06.175 [2024-10-08 17:43:57.699082] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:06.175 [2024-10-08 17:43:57.699139] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:06.175 [2024-10-08 17:43:57.699147] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:06.175 [2024-10-08 17:43:57.699159] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:06.175 [2024-10-08 17:43:57.699166] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:06.175 [2024-10-08 17:43:57.701371] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.175 [2024-10-08 17:43:57.701533] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:06.175 [2024-10-08 17:43:57.701662] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:28:06.175 [2024-10-08 17:43:57.701663] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.436 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:06.436 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:28:06.436 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:06.436 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:06.436 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.436 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:06.436 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:06.436 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.436 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.436 [2024-10-08 17:43:58.391158] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.436 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.436 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:06.436 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.436 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.436 Malloc0 00:28:06.436 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.436 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:06.436 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.436 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.696 [2024-10-08 17:43:58.456929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.696 [ 00:28:06.696 { 00:28:06.696 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:06.696 "subtype": "Discovery", 00:28:06.696 "listen_addresses": [], 00:28:06.696 "allow_any_host": true, 00:28:06.696 "hosts": [] 00:28:06.696 }, 00:28:06.696 { 00:28:06.696 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:06.696 "subtype": "NVMe", 00:28:06.696 "listen_addresses": [ 00:28:06.696 { 00:28:06.696 "trtype": "TCP", 00:28:06.696 "adrfam": "IPv4", 00:28:06.696 "traddr": "10.0.0.2", 00:28:06.696 "trsvcid": "4420" 00:28:06.696 } 00:28:06.696 ], 00:28:06.696 "allow_any_host": true, 00:28:06.696 "hosts": [], 00:28:06.696 "serial_number": "SPDK00000000000001", 00:28:06.696 "model_number": "SPDK bdev Controller", 00:28:06.696 "max_namespaces": 2, 00:28:06.696 "min_cntlid": 1, 00:28:06.696 "max_cntlid": 65519, 00:28:06.696 "namespaces": [ 00:28:06.696 { 00:28:06.696 "nsid": 1, 00:28:06.696 "bdev_name": "Malloc0", 00:28:06.696 "name": "Malloc0", 00:28:06.696 "nguid": "50F248BCDEB4443285DD32ED94BAC550", 00:28:06.696 "uuid": "50f248bc-deb4-4432-85dd-32ed94bac550" 00:28:06.696 } 00:28:06.696 ] 00:28:06.696 } 00:28:06.696 ] 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=456518 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:28:06.696 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.957 Malloc1 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.957 [ 00:28:06.957 { 00:28:06.957 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:06.957 "subtype": "Discovery", 00:28:06.957 "listen_addresses": [], 00:28:06.957 "allow_any_host": true, 00:28:06.957 "hosts": [] 00:28:06.957 }, 00:28:06.957 { 00:28:06.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:06.957 "subtype": "NVMe", 00:28:06.957 "listen_addresses": [ 00:28:06.957 { 00:28:06.957 "trtype": "TCP", 00:28:06.957 "adrfam": "IPv4", 00:28:06.957 "traddr": "10.0.0.2", 00:28:06.957 "trsvcid": "4420" 00:28:06.957 } 00:28:06.957 ], 00:28:06.957 "allow_any_host": true, 00:28:06.957 "hosts": [], 00:28:06.957 "serial_number": "SPDK00000000000001", 00:28:06.957 "model_number": "SPDK bdev Controller", 00:28:06.957 "max_namespaces": 2, 00:28:06.957 "min_cntlid": 1, 00:28:06.957 "max_cntlid": 65519, 00:28:06.957 "namespaces": [ 00:28:06.957 { 00:28:06.957 "nsid": 1, 00:28:06.957 "bdev_name": "Malloc0", 00:28:06.957 "name": "Malloc0", 00:28:06.957 "nguid": "50F248BCDEB4443285DD32ED94BAC550", 00:28:06.957 "uuid": "50f248bc-deb4-4432-85dd-32ed94bac550" 00:28:06.957 }, 00:28:06.957 { 00:28:06.957 "nsid": 2, 00:28:06.957 "bdev_name": "Malloc1", 00:28:06.957 "name": "Malloc1", 00:28:06.957 "nguid": "7161960222754DA5A202AC5B34E2163D", 00:28:06.957 "uuid": 
"71619602-2275-4da5-a202-ac5b34e2163d" 00:28:06.957 } 00:28:06.957 ] 00:28:06.957 } 00:28:06.957 ] 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 456518 00:28:06.957 Asynchronous Event Request test 00:28:06.957 Attaching to 10.0.0.2 00:28:06.957 Attached to 10.0.0.2 00:28:06.957 Registering asynchronous event callbacks... 00:28:06.957 Starting namespace attribute notice tests for all controllers... 00:28:06.957 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:06.957 aer_cb - Changed Namespace 00:28:06.957 Cleaning up... 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:06.957 rmmod nvme_tcp 00:28:06.957 rmmod nvme_fabrics 00:28:06.957 rmmod nvme_keyring 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 456171 ']' 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 456171 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 456171 ']' 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 456171 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:28:06.957 17:43:58 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:06.957 17:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 456171 00:28:07.218 17:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:07.218 17:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:07.218 17:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 456171' 00:28:07.218 killing process with pid 456171 00:28:07.218 17:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 456171 00:28:07.218 17:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 456171 00:28:07.218 17:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:07.218 17:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:07.218 17:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:07.218 17:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:28:07.218 17:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:28:07.218 17:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:07.218 17:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:28:07.218 17:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:07.218 17:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:07.218 17:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.218 17:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.218 17:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.775 17:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:09.776 00:28:09.776 real 0m11.692s 00:28:09.776 user 0m8.050s 00:28:09.776 sys 0m6.368s 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:09.776 ************************************ 00:28:09.776 END TEST nvmf_aer 00:28:09.776 ************************************ 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.776 ************************************ 00:28:09.776 START TEST nvmf_async_init 00:28:09.776 ************************************ 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:09.776 * Looking for test storage... 
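The teardown traced above follows the harness's killprocess discipline before END TEST nvmf_aer: confirm the PID is still alive with kill -0, check via ps -o comm= that it is still an SPDK reactor (reactor_0) and not a sudo wrapper, then kill and wait so the exit status is reaped. A condensed reconstruction (the real helper in autotest_common.sh carries more platform branches):

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0              # already gone
        [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
        if [ "$process_name" != sudo ]; then                # never kill the sudo wrapper
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"
        fi
    }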
00:28:09.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:09.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.776 --rc genhtml_branch_coverage=1 00:28:09.776 --rc genhtml_function_coverage=1 00:28:09.776 --rc genhtml_legend=1 00:28:09.776 --rc geninfo_all_blocks=1 00:28:09.776 --rc geninfo_unexecuted_blocks=1 00:28:09.776 00:28:09.776 ' 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:09.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.776 --rc genhtml_branch_coverage=1 00:28:09.776 --rc genhtml_function_coverage=1 00:28:09.776 --rc genhtml_legend=1 00:28:09.776 --rc geninfo_all_blocks=1 00:28:09.776 --rc geninfo_unexecuted_blocks=1 00:28:09.776 00:28:09.776 ' 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:09.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.776 --rc genhtml_branch_coverage=1 00:28:09.776 --rc genhtml_function_coverage=1 00:28:09.776 --rc genhtml_legend=1 00:28:09.776 --rc geninfo_all_blocks=1 00:28:09.776 --rc geninfo_unexecuted_blocks=1 00:28:09.776 00:28:09.776 ' 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:09.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.776 --rc genhtml_branch_coverage=1 00:28:09.776 --rc genhtml_function_coverage=1 00:28:09.776 --rc genhtml_legend=1 00:28:09.776 --rc geninfo_all_blocks=1 00:28:09.776 --rc geninfo_unexecuted_blocks=1 00:28:09.776 00:28:09.776 ' 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:09.776 17:44:01 
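The long run of scripts/common.sh trace above is the harness deciding which lcov flags to use: it takes the installed lcov version (lcov --version piped through awk), splits it and the threshold 2 on '.' and '-', and compares field by field; 'lt 1.15 2' succeeds, so the pre-2.0 --rc lcov_* option spelling is exported. A condensed sketch of that comparison (missing fields treated as 0; the non-numeric-field handling of the real decimal() helper is omitted for brevity):

    # Succeeds when version $1 sorts strictly before version $2,
    # mirroring the cmp_versions '<' walk in scripts/common.sh.
    version_lt() {
        local IFS='.-' v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        done
        return 1    # versions are equal
    }

    version_lt 1.15 2 && echo 'use legacy --rc lcov_branch_coverage=1 spelling'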
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:09.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:09.776 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:09.777 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:09.777 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:09.777 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:09.777 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:09.777 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:09.777 17:44:01 
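One captured error is worth noting in the trace above: while sourcing nvmf/common.sh, build_nvmf_app_args evaluates '[' '' -eq 1 ']' at line 33 and bash prints "[: : integer expression expected", because test's -eq demands integers and the flag being probed expands to the empty string. The run is unaffected (the test simply evaluates false), but the noise is avoidable by defaulting the expansion; a sketch, with SPDK_TEST_FOO and --some-flag as hypothetical stand-in names:

    # '[' "" -eq 1 ']' errors; '[' "${VAR:-0}" -eq 1 ']' stays quiet and correct.
    if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then
        NVMF_APP+=(--some-flag)   # hypothetical consumer of the flag
    fi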
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:09.777 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:09.777 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:09.777 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=22a8c69b0fb84264850b0c90d4d6c833 00:28:09.777 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:09.777 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:09.777 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:09.777 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:09.777 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:09.777 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:09.777 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.777 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.777 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.777 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:09.777 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:09.777 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:28:09.777 17:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.919 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:17.920 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:17.920 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:17.920 Found net devices under 0000:31:00.0: cvl_0_0 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:17.920 Found net devices under 0000:31:00.1: cvl_0_1 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.920 17:44:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:17.920 17:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:17.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:28:17.920 00:28:17.920 --- 10.0.0.2 ping statistics --- 00:28:17.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.920 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:17.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:17.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:28:17.920 00:28:17.920 --- 10.0.0.1 ping statistics --- 00:28:17.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.920 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=460914 00:28:17.920 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 460914 00:28:17.921 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:17.921 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 460914 ']' 00:28:17.921 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.921 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:17.921 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.921 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:17.921 17:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.921 [2024-10-08 17:44:09.327966] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
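Everything between nvmf_tcp_init and the pings above is the test fabric being assembled: the first E810 port (cvl_0_0) becomes the target side at 10.0.0.2 inside a private network namespace, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits TCP/4420, both directions are ping-verified, and nvmf_tgt is then started inside the namespace. The traced sequence condensed to its essential commands (addr-flush and iptables-comment details omitted; the target binary path shortened to a relative one):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1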
00:28:17.921 [2024-10-08 17:44:09.328041] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:17.921 [2024-10-08 17:44:09.413261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.921 [2024-10-08 17:44:09.506832] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:17.921 [2024-10-08 17:44:09.506896] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:17.921 [2024-10-08 17:44:09.506905] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:17.921 [2024-10-08 17:44:09.506912] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:17.921 [2024-10-08 17:44:09.506918] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:17.921 [2024-10-08 17:44:09.507772] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.181 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:18.181 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:28:18.181 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:18.181 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:18.181 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.443 [2024-10-08 17:44:10.217374] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.443 null0 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 22a8c69b0fb84264850b0c90d4d6c833 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.443 [2024-10-08 17:44:10.277757] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.443 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.703 nvme0n1 00:28:18.703 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.703 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:18.703 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.703 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.703 [ 00:28:18.703 { 00:28:18.703 "name": "nvme0n1", 00:28:18.703 "aliases": [ 00:28:18.703 "22a8c69b-0fb8-4264-850b-0c90d4d6c833" 00:28:18.703 ], 00:28:18.703 "product_name": "NVMe disk", 00:28:18.703 "block_size": 512, 00:28:18.703 "num_blocks": 2097152, 00:28:18.703 "uuid": "22a8c69b-0fb8-4264-850b-0c90d4d6c833", 00:28:18.703 "numa_id": 0, 00:28:18.703 "assigned_rate_limits": { 00:28:18.703 "rw_ios_per_sec": 0, 00:28:18.703 "rw_mbytes_per_sec": 0, 00:28:18.703 "r_mbytes_per_sec": 0, 00:28:18.703 "w_mbytes_per_sec": 0 00:28:18.703 }, 00:28:18.703 "claimed": false, 00:28:18.703 "zoned": false, 00:28:18.703 "supported_io_types": { 00:28:18.703 "read": true, 00:28:18.703 "write": true, 00:28:18.703 "unmap": false, 00:28:18.703 "flush": true, 00:28:18.703 "reset": true, 00:28:18.703 "nvme_admin": true, 00:28:18.703 "nvme_io": true, 00:28:18.703 "nvme_io_md": false, 00:28:18.703 "write_zeroes": true, 00:28:18.703 "zcopy": false, 00:28:18.703 "get_zone_info": false, 00:28:18.703 "zone_management": false, 00:28:18.703 "zone_append": false, 00:28:18.703 "compare": true, 00:28:18.703 "compare_and_write": true, 00:28:18.703 "abort": true, 00:28:18.703 "seek_hole": false, 00:28:18.703 "seek_data": false, 00:28:18.703 "copy": true, 00:28:18.703 "nvme_iov_md": false 00:28:18.703 }, 00:28:18.703 
"memory_domains": [ 00:28:18.703 { 00:28:18.703 "dma_device_id": "system", 00:28:18.703 "dma_device_type": 1 00:28:18.703 } 00:28:18.703 ], 00:28:18.703 "driver_specific": { 00:28:18.703 "nvme": [ 00:28:18.703 { 00:28:18.703 "trid": { 00:28:18.703 "trtype": "TCP", 00:28:18.703 "adrfam": "IPv4", 00:28:18.703 "traddr": "10.0.0.2", 00:28:18.703 "trsvcid": "4420", 00:28:18.703 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:18.703 }, 00:28:18.703 "ctrlr_data": { 00:28:18.703 "cntlid": 1, 00:28:18.703 "vendor_id": "0x8086", 00:28:18.703 "model_number": "SPDK bdev Controller", 00:28:18.703 "serial_number": "00000000000000000000", 00:28:18.703 "firmware_revision": "25.01", 00:28:18.703 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:18.703 "oacs": { 00:28:18.703 "security": 0, 00:28:18.703 "format": 0, 00:28:18.703 "firmware": 0, 00:28:18.703 "ns_manage": 0 00:28:18.703 }, 00:28:18.703 "multi_ctrlr": true, 00:28:18.703 "ana_reporting": false 00:28:18.703 }, 00:28:18.703 "vs": { 00:28:18.703 "nvme_version": "1.3" 00:28:18.703 }, 00:28:18.703 "ns_data": { 00:28:18.703 "id": 1, 00:28:18.703 "can_share": true 00:28:18.703 } 00:28:18.703 } 00:28:18.703 ], 00:28:18.703 "mp_policy": "active_passive" 00:28:18.703 } 00:28:18.703 } 00:28:18.703 ] 00:28:18.703 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.703 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:18.704 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.704 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.704 [2024-10-08 17:44:10.555583] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:18.704 [2024-10-08 17:44:10.555678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x255bc10 (9): Bad file descriptor 00:28:18.704 [2024-10-08 17:44:10.688092] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:18.704 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.704 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:18.704 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.704 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.965 [ 00:28:18.965 { 00:28:18.965 "name": "nvme0n1", 00:28:18.965 "aliases": [ 00:28:18.965 "22a8c69b-0fb8-4264-850b-0c90d4d6c833" 00:28:18.965 ], 00:28:18.965 "product_name": "NVMe disk", 00:28:18.965 "block_size": 512, 00:28:18.965 "num_blocks": 2097152, 00:28:18.965 "uuid": "22a8c69b-0fb8-4264-850b-0c90d4d6c833", 00:28:18.965 "numa_id": 0, 00:28:18.965 "assigned_rate_limits": { 00:28:18.965 "rw_ios_per_sec": 0, 00:28:18.965 "rw_mbytes_per_sec": 0, 00:28:18.965 "r_mbytes_per_sec": 0, 00:28:18.965 "w_mbytes_per_sec": 0 00:28:18.965 }, 00:28:18.965 "claimed": false, 00:28:18.965 "zoned": false, 00:28:18.965 "supported_io_types": { 00:28:18.965 "read": true, 00:28:18.965 "write": true, 00:28:18.965 "unmap": false, 00:28:18.965 "flush": true, 00:28:18.965 "reset": true, 00:28:18.965 "nvme_admin": true, 00:28:18.965 "nvme_io": true, 00:28:18.965 "nvme_io_md": false, 00:28:18.965 "write_zeroes": true, 00:28:18.965 "zcopy": false, 00:28:18.965 "get_zone_info": false, 00:28:18.965 "zone_management": false, 00:28:18.965 "zone_append": false, 00:28:18.965 "compare": true, 00:28:18.965 "compare_and_write": true, 00:28:18.965 "abort": true, 00:28:18.965 "seek_hole": false, 00:28:18.965 "seek_data": false, 00:28:18.965 "copy": true, 00:28:18.965 "nvme_iov_md": false 00:28:18.965 }, 00:28:18.965 "memory_domains": [ 00:28:18.965 { 00:28:18.965 "dma_device_id": "system", 00:28:18.965 "dma_device_type": 1 00:28:18.965 } 00:28:18.965 ], 00:28:18.965 "driver_specific": { 00:28:18.965 "nvme": [ 00:28:18.965 { 00:28:18.965 "trid": { 00:28:18.965 "trtype": "TCP", 00:28:18.965 "adrfam": "IPv4", 00:28:18.965 "traddr": "10.0.0.2", 00:28:18.965 "trsvcid": "4420", 00:28:18.965 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:18.965 }, 00:28:18.965 "ctrlr_data": { 00:28:18.965 "cntlid": 2, 00:28:18.965 "vendor_id": "0x8086", 00:28:18.965 "model_number": "SPDK bdev Controller", 00:28:18.965 "serial_number": "00000000000000000000", 00:28:18.965 "firmware_revision": "25.01", 00:28:18.965 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:18.965 "oacs": { 00:28:18.965 "security": 0, 00:28:18.965 "format": 0, 00:28:18.965 "firmware": 0, 00:28:18.965 "ns_manage": 0 00:28:18.965 }, 00:28:18.965 "multi_ctrlr": true, 00:28:18.965 "ana_reporting": false 00:28:18.965 }, 00:28:18.965 "vs": { 00:28:18.965 "nvme_version": "1.3" 00:28:18.965 }, 00:28:18.965 "ns_data": { 00:28:18.965 "id": 1, 00:28:18.965 "can_share": true 00:28:18.965 } 00:28:18.965 } 00:28:18.965 ], 00:28:18.965 "mp_policy": "active_passive" 00:28:18.965 } 00:28:18.965 } 00:28:18.965 ] 00:28:18.965 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.965 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.965 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.965 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.965 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:28:18.965 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:18.965 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.lDMCtJQXQz 00:28:18.965 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:18.965 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.lDMCtJQXQz 00:28:18.965 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.lDMCtJQXQz 00:28:18.965 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.965 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.965 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.965 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:18.965 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.965 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.965 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.965 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:18.965 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.965 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.965 [2024-10-08 17:44:10.780293] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:18.965 [2024-10-08 17:44:10.780482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:18.965 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.966 [2024-10-08 17:44:10.804370] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:18.966 nvme0n1 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.966 [ 00:28:18.966 { 00:28:18.966 "name": "nvme0n1", 00:28:18.966 "aliases": [ 00:28:18.966 "22a8c69b-0fb8-4264-850b-0c90d4d6c833" 00:28:18.966 ], 00:28:18.966 "product_name": "NVMe disk", 00:28:18.966 "block_size": 512, 00:28:18.966 "num_blocks": 2097152, 00:28:18.966 "uuid": "22a8c69b-0fb8-4264-850b-0c90d4d6c833", 00:28:18.966 "numa_id": 0, 00:28:18.966 "assigned_rate_limits": { 00:28:18.966 "rw_ios_per_sec": 0, 00:28:18.966 "rw_mbytes_per_sec": 0, 00:28:18.966 "r_mbytes_per_sec": 0, 00:28:18.966 "w_mbytes_per_sec": 0 00:28:18.966 }, 00:28:18.966 "claimed": false, 00:28:18.966 "zoned": false, 00:28:18.966 "supported_io_types": { 00:28:18.966 "read": true, 00:28:18.966 "write": true, 00:28:18.966 "unmap": false, 00:28:18.966 "flush": true, 00:28:18.966 "reset": true, 00:28:18.966 "nvme_admin": true, 00:28:18.966 "nvme_io": true, 00:28:18.966 "nvme_io_md": false, 00:28:18.966 "write_zeroes": true, 00:28:18.966 "zcopy": false, 00:28:18.966 "get_zone_info": false, 00:28:18.966 "zone_management": false, 00:28:18.966 "zone_append": false, 00:28:18.966 "compare": true, 00:28:18.966 "compare_and_write": true, 00:28:18.966 "abort": true, 00:28:18.966 "seek_hole": false, 00:28:18.966 "seek_data": false, 00:28:18.966 "copy": true, 00:28:18.966 "nvme_iov_md": false 00:28:18.966 }, 00:28:18.966 "memory_domains": [ 00:28:18.966 { 00:28:18.966 "dma_device_id": "system", 00:28:18.966 "dma_device_type": 1 00:28:18.966 } 00:28:18.966 ], 00:28:18.966 "driver_specific": { 00:28:18.966 "nvme": [ 00:28:18.966 { 00:28:18.966 "trid": { 00:28:18.966 "trtype": "TCP", 00:28:18.966 "adrfam": "IPv4", 00:28:18.966 "traddr": "10.0.0.2", 00:28:18.966 "trsvcid": "4421", 00:28:18.966 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:18.966 }, 00:28:18.966 "ctrlr_data": { 00:28:18.966 "cntlid": 3, 00:28:18.966 "vendor_id": "0x8086", 00:28:18.966 "model_number": "SPDK bdev Controller", 00:28:18.966 "serial_number": "00000000000000000000", 00:28:18.966 "firmware_revision": "25.01", 00:28:18.966 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:18.966 "oacs": { 00:28:18.966 "security": 0, 00:28:18.966 "format": 0, 00:28:18.966 "firmware": 0, 00:28:18.966 "ns_manage": 0 00:28:18.966 }, 00:28:18.966 "multi_ctrlr": true, 00:28:18.966 "ana_reporting": false 00:28:18.966 }, 00:28:18.966 "vs": { 00:28:18.966 "nvme_version": "1.3" 00:28:18.966 }, 00:28:18.966 "ns_data": { 00:28:18.966 "id": 1, 00:28:18.966 "can_share": true 00:28:18.966 } 00:28:18.966 } 00:28:18.966 ], 00:28:18.966 "mp_policy": "active_passive" 00:28:18.966 } 00:28:18.966 } 00:28:18.966 ] 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.lDMCtJQXQz 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
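The tail of the test above re-attaches through a TLS-protected listener: an interchange-format PSK is written to a mode-0600 temp file, registered with the keyring as key0, any-host access on cnode0 is disabled, a --secure-channel listener is added on port 4421, host1 is admitted with that PSK, and the initiator attaches with the same key (both sides log that TLS support is experimental). The traced rpc_cmd sequence as rpc.py equivalents (the temp-file name comes from mktemp and differs per run):

    key_path=$(mktemp)    # /tmp/tmp.lDMCtJQXQz in this run
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    ./scripts/rpc.py keyring_file_add_key key0 "$key_path"
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk key0
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host1 --psk key0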
00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:18.966 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:18.966 rmmod nvme_tcp 00:28:18.966 rmmod nvme_fabrics 00:28:19.226 rmmod nvme_keyring 00:28:19.226 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:19.227 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:28:19.227 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:28:19.227 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 460914 ']' 00:28:19.227 17:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 460914 00:28:19.227 17:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 460914 ']' 00:28:19.227 17:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 460914 00:28:19.227 17:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:28:19.227 17:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:19.227 17:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 460914 00:28:19.227 17:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:19.227 17:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:19.227 17:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 460914' 00:28:19.227 killing process with pid 460914 00:28:19.227 17:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 460914 00:28:19.227 17:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 460914 00:28:19.487 17:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:19.487 17:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:19.487 17:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:19.487 17:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:28:19.487 17:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:28:19.487 17:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:19.487 17:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:28:19.487 17:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:19.487 17:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:19.487 17:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.487 
17:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.487 17:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.399 17:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:21.399 00:28:21.399 real 0m11.984s 00:28:21.399 user 0m4.301s 00:28:21.399 sys 0m6.260s 00:28:21.399 17:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:21.399 17:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:21.399 ************************************ 00:28:21.399 END TEST nvmf_async_init 00:28:21.399 ************************************ 00:28:21.399 17:44:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:21.399 17:44:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:21.399 17:44:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:21.399 17:44:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.660 ************************************ 00:28:21.660 START TEST dma 00:28:21.660 ************************************ 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:21.660 * Looking for test storage... 00:28:21.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:21.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.660 --rc genhtml_branch_coverage=1 00:28:21.660 --rc genhtml_function_coverage=1 00:28:21.660 --rc genhtml_legend=1 00:28:21.660 --rc geninfo_all_blocks=1 00:28:21.660 --rc geninfo_unexecuted_blocks=1 00:28:21.660 00:28:21.660 ' 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:21.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.660 --rc genhtml_branch_coverage=1 00:28:21.660 --rc genhtml_function_coverage=1 00:28:21.660 --rc genhtml_legend=1 00:28:21.660 --rc geninfo_all_blocks=1 00:28:21.660 --rc geninfo_unexecuted_blocks=1 00:28:21.660 00:28:21.660 ' 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:21.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.660 --rc genhtml_branch_coverage=1 00:28:21.660 --rc genhtml_function_coverage=1 00:28:21.660 --rc genhtml_legend=1 00:28:21.660 --rc geninfo_all_blocks=1 00:28:21.660 --rc geninfo_unexecuted_blocks=1 00:28:21.660 00:28:21.660 ' 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:21.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.660 --rc genhtml_branch_coverage=1 00:28:21.660 --rc genhtml_function_coverage=1 00:28:21.660 --rc genhtml_legend=1 00:28:21.660 --rc geninfo_all_blocks=1 00:28:21.660 --rc geninfo_unexecuted_blocks=1 00:28:21.660 00:28:21.660 ' 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:21.660 
17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:21.660 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:21.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:21.661 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:21.661 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:21.661 17:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:21.921 17:44:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:21.921 17:44:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:28:21.921 00:28:21.921 real 0m0.236s 00:28:21.921 user 0m0.149s 00:28:21.921 sys 0m0.102s 00:28:21.921 17:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:21.921 17:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:28:21.921 ************************************ 00:28:21.921 END TEST dma 00:28:21.921 ************************************ 00:28:21.921 17:44:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:21.921 17:44:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:21.921 17:44:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:21.921 17:44:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.921 ************************************ 00:28:21.921 START TEST nvmf_identify 00:28:21.921 
************************************ 00:28:21.921 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:21.921 * Looking for test storage... 00:28:21.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:21.921 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:21.921 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:28:21.921 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:22.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.183 --rc genhtml_branch_coverage=1 00:28:22.183 --rc genhtml_function_coverage=1 00:28:22.183 --rc genhtml_legend=1 00:28:22.183 --rc geninfo_all_blocks=1 00:28:22.183 --rc geninfo_unexecuted_blocks=1 00:28:22.183 00:28:22.183 ' 00:28:22.183 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:22.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.183 --rc genhtml_branch_coverage=1 00:28:22.183 --rc genhtml_function_coverage=1 00:28:22.183 --rc genhtml_legend=1 00:28:22.183 --rc geninfo_all_blocks=1 00:28:22.183 --rc geninfo_unexecuted_blocks=1 00:28:22.183 00:28:22.183 ' 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:22.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.184 --rc genhtml_branch_coverage=1 00:28:22.184 --rc genhtml_function_coverage=1 00:28:22.184 --rc genhtml_legend=1 00:28:22.184 --rc geninfo_all_blocks=1 00:28:22.184 --rc geninfo_unexecuted_blocks=1 00:28:22.184 00:28:22.184 ' 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:22.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.184 --rc genhtml_branch_coverage=1 00:28:22.184 --rc genhtml_function_coverage=1 00:28:22.184 --rc genhtml_legend=1 00:28:22.184 --rc geninfo_all_blocks=1 00:28:22.184 --rc geninfo_unexecuted_blocks=1 00:28:22.184 00:28:22.184 ' 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:22.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:28:22.184 17:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:30.329 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:30.329 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:30.329 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
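The gather_supported_nvmf_pci_devs trace above first matches the host's PCI IDs against the known Intel E810/x722 and Mellanox device tables, then resolves each matched address to its kernel interface through sysfs, as the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion shows. A minimal standalone sketch of that resolution step, assuming the same sysfs layout and using the two addresses this run discovered (0000:31:00.0 and 0000:31:00.1); it illustrates the pattern and is not the test harness itself:

    #!/usr/bin/env bash
    # Resolve PCI NIC addresses to their kernel net device names via sysfs.
    pci_devs=("0000:31:00.0" "0000:31:00.1")   # addresses found in this run
    for pci in "${pci_devs[@]}"; do
        # Each netdev bound to the PCI function appears as a directory entry here.
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$path" ] || continue   # unmatched glob: no netdev bound
            echo "Found net device under $pci: ${path##*/}"
        done
    done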
00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:30.330 Found net devices under 0000:31:00.0: cvl_0_0 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:30.330 Found net devices under 0000:31:00.1: cvl_0_1 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:30.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:30.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:28:30.330 00:28:30.330 --- 10.0.0.2 ping statistics --- 00:28:30.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.330 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:30.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:30.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:28:30.330 00:28:30.330 --- 10.0.0.1 ping statistics --- 00:28:30.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.330 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=465650 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 465650 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 465650 ']' 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:30.330 17:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:30.330 [2024-10-08 17:44:21.791513] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
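At this point the trace has finished building the split test topology that nvmf_tcp_init sets up: the first E810 port (cvl_0_0) was moved into the cvl_0_0_ns_spdk network namespace as the target side at 10.0.0.2, cvl_0_1 stayed in the root namespace as the initiator at 10.0.0.1, an iptables rule opened TCP port 4420 on the initiator interface, and a ping in each direction confirmed reachability before nvmf_tgt was launched inside the namespace. A minimal sketch of the same setup, condensed from the commands visible in the trace and assuming the interface and namespace names from this run (requires root):

    #!/usr/bin/env bash
    set -e
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"                       # target-side namespace
    ip link set cvl_0_0 netns "$NS"          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Accept NVMe/TCP traffic to the default port on the initiator interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator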
00:28:30.330 [2024-10-08 17:44:21.791581] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:30.330 [2024-10-08 17:44:21.880891] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:30.330 [2024-10-08 17:44:21.976937] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:30.330 [2024-10-08 17:44:21.977006] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:30.330 [2024-10-08 17:44:21.977015] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:30.330 [2024-10-08 17:44:21.977022] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:30.330 [2024-10-08 17:44:21.977028] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:30.330 [2024-10-08 17:44:21.979472] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.330 [2024-10-08 17:44:21.979634] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:30.330 [2024-10-08 17:44:21.979793] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:28:30.330 [2024-10-08 17:44:21.979793] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:30.904 [2024-10-08 17:44:22.630564] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:30.904 Malloc0 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:30.904 [2024-10-08 17:44:22.740465] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.904 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:30.905 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.905 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:30.905 [ 00:28:30.905 { 00:28:30.905 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:30.905 "subtype": "Discovery", 00:28:30.905 "listen_addresses": [ 00:28:30.905 { 00:28:30.905 "trtype": "TCP", 00:28:30.905 "adrfam": "IPv4", 00:28:30.905 "traddr": "10.0.0.2", 00:28:30.905 "trsvcid": "4420" 00:28:30.905 } 00:28:30.905 ], 00:28:30.905 "allow_any_host": true, 00:28:30.905 "hosts": [] 00:28:30.905 }, 00:28:30.905 { 00:28:30.905 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:30.905 "subtype": "NVMe", 00:28:30.905 "listen_addresses": [ 00:28:30.905 { 00:28:30.905 "trtype": "TCP", 00:28:30.905 "adrfam": "IPv4", 00:28:30.905 "traddr": "10.0.0.2", 00:28:30.905 "trsvcid": "4420" 00:28:30.905 } 00:28:30.905 ], 00:28:30.905 "allow_any_host": true, 00:28:30.905 "hosts": [], 00:28:30.905 "serial_number": "SPDK00000000000001", 00:28:30.905 "model_number": "SPDK bdev Controller", 00:28:30.905 "max_namespaces": 32, 00:28:30.905 "min_cntlid": 1, 00:28:30.905 "max_cntlid": 65519, 00:28:30.905 "namespaces": [ 00:28:30.905 { 00:28:30.905 "nsid": 1, 00:28:30.905 "bdev_name": "Malloc0", 00:28:30.905 "name": "Malloc0", 00:28:30.905 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:30.905 "eui64": "ABCDEF0123456789", 00:28:30.905 "uuid": "9bf27e00-424d-4783-8f9a-4391bf740766" 00:28:30.905 } 00:28:30.905 ] 00:28:30.905 } 00:28:30.905 ] 00:28:30.905 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.905 17:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:30.905 [2024-10-08 17:44:22.805303] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:28:30.905 [2024-10-08 17:44:22.805346] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465747 ] 00:28:30.905 [2024-10-08 17:44:22.843173] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:30.905 [2024-10-08 17:44:22.843242] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:30.905 [2024-10-08 17:44:22.843248] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:30.905 [2024-10-08 17:44:22.843270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:30.905 [2024-10-08 17:44:22.843282] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:30.905 [2024-10-08 17:44:22.844131] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:30.905 [2024-10-08 17:44:22.844185] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x18d5620 0 00:28:30.905 [2024-10-08 17:44:22.857992] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:30.905 [2024-10-08 17:44:22.858010] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:30.905 [2024-10-08 17:44:22.858015] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:30.905 [2024-10-08 17:44:22.858019] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:30.905 [2024-10-08 17:44:22.858056] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.905 [2024-10-08 17:44:22.858063] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.905 [2024-10-08 17:44:22.858068] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18d5620) 00:28:30.905 [2024-10-08 17:44:22.858086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:30.905 [2024-10-08 17:44:22.858111] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935480, cid 0, qid 0 00:28:30.905 [2024-10-08 17:44:22.865989] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.905 [2024-10-08 17:44:22.865999] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.905 [2024-10-08 17:44:22.866004] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.905 [2024-10-08 17:44:22.866009] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935480) on tqpair=0x18d5620 00:28:30.905 [2024-10-08 17:44:22.866022] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:30.905 [2024-10-08 17:44:22.866030] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:30.905 [2024-10-08 17:44:22.866036] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:30.905 [2024-10-08 17:44:22.866053] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.905 [2024-10-08 17:44:22.866057] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.905 [2024-10-08 17:44:22.866061] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18d5620) 00:28:30.905 [2024-10-08 17:44:22.866070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.905 [2024-10-08 17:44:22.866086] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935480, cid 0, qid 0 00:28:30.905 [2024-10-08 17:44:22.866288] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.905 [2024-10-08 17:44:22.866295] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.905 [2024-10-08 17:44:22.866298] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.905 [2024-10-08 17:44:22.866302] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935480) on tqpair=0x18d5620 00:28:30.905 [2024-10-08 17:44:22.866308] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:30.905 [2024-10-08 17:44:22.866316] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:30.905 [2024-10-08 17:44:22.866323] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.905 [2024-10-08 17:44:22.866327] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.905 [2024-10-08 17:44:22.866330] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18d5620) 00:28:30.905 [2024-10-08 17:44:22.866337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.905 [2024-10-08 17:44:22.866348] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935480, cid 0, qid 0 00:28:30.905 [2024-10-08 17:44:22.866546] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.905 [2024-10-08 17:44:22.866553] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.905 [2024-10-08 17:44:22.866557] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.905 [2024-10-08 17:44:22.866565] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935480) on tqpair=0x18d5620 00:28:30.905 [2024-10-08 17:44:22.866571] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:30.905 [2024-10-08 17:44:22.866581] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:30.905 [2024-10-08 17:44:22.866588] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.905 [2024-10-08 17:44:22.866591] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.905 [2024-10-08 17:44:22.866595] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18d5620) 00:28:30.905 [2024-10-08 17:44:22.866602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.905 [2024-10-08 17:44:22.866612] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935480, cid 0, qid 0 00:28:30.905 
[2024-10-08 17:44:22.866838] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.905 [2024-10-08 17:44:22.866845] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.905 [2024-10-08 17:44:22.866848] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.905 [2024-10-08 17:44:22.866852] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935480) on tqpair=0x18d5620 00:28:30.905 [2024-10-08 17:44:22.866858] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:30.905 [2024-10-08 17:44:22.866868] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.905 [2024-10-08 17:44:22.866872] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.905 [2024-10-08 17:44:22.866875] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18d5620) 00:28:30.905 [2024-10-08 17:44:22.866882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.905 [2024-10-08 17:44:22.866892] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935480, cid 0, qid 0 00:28:30.905 [2024-10-08 17:44:22.867061] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.905 [2024-10-08 17:44:22.867068] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.905 [2024-10-08 17:44:22.867071] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.905 [2024-10-08 17:44:22.867075] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935480) on tqpair=0x18d5620 00:28:30.905 [2024-10-08 17:44:22.867081] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:30.905 [2024-10-08 17:44:22.867086] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:30.906 [2024-10-08 17:44:22.867094] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:30.906 [2024-10-08 17:44:22.867200] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:30.906 [2024-10-08 17:44:22.867205] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:30.906 [2024-10-08 17:44:22.867215] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.867219] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.867223] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18d5620) 00:28:30.906 [2024-10-08 17:44:22.867229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.906 [2024-10-08 17:44:22.867241] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935480, cid 0, qid 0 00:28:30.906 [2024-10-08 17:44:22.867452] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.906 [2024-10-08 17:44:22.867459] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:28:30.906 [2024-10-08 17:44:22.867463] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.867467] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935480) on tqpair=0x18d5620 00:28:30.906 [2024-10-08 17:44:22.867472] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:30.906 [2024-10-08 17:44:22.867481] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.867485] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.867489] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18d5620) 00:28:30.906 [2024-10-08 17:44:22.867495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.906 [2024-10-08 17:44:22.867505] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935480, cid 0, qid 0 00:28:30.906 [2024-10-08 17:44:22.867698] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.906 [2024-10-08 17:44:22.867705] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.906 [2024-10-08 17:44:22.867708] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.867712] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935480) on tqpair=0x18d5620 00:28:30.906 [2024-10-08 17:44:22.867717] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:30.906 [2024-10-08 17:44:22.867722] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:30.906 [2024-10-08 17:44:22.867730] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:30.906 [2024-10-08 17:44:22.867745] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:30.906 [2024-10-08 17:44:22.867756] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.867760] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18d5620) 00:28:30.906 [2024-10-08 17:44:22.867767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.906 [2024-10-08 17:44:22.867777] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935480, cid 0, qid 0 00:28:30.906 [2024-10-08 17:44:22.868031] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:30.906 [2024-10-08 17:44:22.868038] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:30.906 [2024-10-08 17:44:22.868042] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.868047] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18d5620): datao=0, datal=4096, cccid=0 00:28:30.906 [2024-10-08 17:44:22.868052] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1935480) on tqpair(0x18d5620): expected_datao=0, 
payload_size=4096 00:28:30.906 [2024-10-08 17:44:22.868057] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.868065] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.868070] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.868229] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.906 [2024-10-08 17:44:22.868235] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.906 [2024-10-08 17:44:22.868238] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.868242] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935480) on tqpair=0x18d5620 00:28:30.906 [2024-10-08 17:44:22.868254] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:30.906 [2024-10-08 17:44:22.868260] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:30.906 [2024-10-08 17:44:22.868265] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:30.906 [2024-10-08 17:44:22.868270] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:30.906 [2024-10-08 17:44:22.868275] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:30.906 [2024-10-08 17:44:22.868280] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:30.906 [2024-10-08 17:44:22.868289] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:30.906 [2024-10-08 17:44:22.868300] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.868304] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.868307] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18d5620) 00:28:30.906 [2024-10-08 17:44:22.868315] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:30.906 [2024-10-08 17:44:22.868326] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935480, cid 0, qid 0 00:28:30.906 [2024-10-08 17:44:22.868521] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.906 [2024-10-08 17:44:22.868527] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.906 [2024-10-08 17:44:22.868531] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.868535] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935480) on tqpair=0x18d5620 00:28:30.906 [2024-10-08 17:44:22.868544] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.868548] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.868551] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18d5620) 00:28:30.906 [2024-10-08 17:44:22.868558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.906 [2024-10-08 17:44:22.868564] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.868568] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.868572] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x18d5620) 00:28:30.906 [2024-10-08 17:44:22.868577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.906 [2024-10-08 17:44:22.868584] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.868588] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.868591] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x18d5620) 00:28:30.906 [2024-10-08 17:44:22.868597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.906 [2024-10-08 17:44:22.868604] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.868607] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.868611] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18d5620) 00:28:30.906 [2024-10-08 17:44:22.868617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.906 [2024-10-08 17:44:22.868622] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:30.906 [2024-10-08 17:44:22.868635] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:30.906 [2024-10-08 17:44:22.868642] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.868646] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18d5620) 00:28:30.906 [2024-10-08 17:44:22.868653] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.906 [2024-10-08 17:44:22.868665] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935480, cid 0, qid 0 00:28:30.906 [2024-10-08 17:44:22.868670] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935600, cid 1, qid 0 00:28:30.906 [2024-10-08 17:44:22.868675] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935780, cid 2, qid 0 00:28:30.906 [2024-10-08 17:44:22.868680] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935900, cid 3, qid 0 00:28:30.906 [2024-10-08 17:44:22.868685] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935a80, cid 4, qid 0 00:28:30.906 [2024-10-08 17:44:22.868918] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.906 [2024-10-08 17:44:22.868925] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.906 [2024-10-08 17:44:22.868929] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.868932] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1935a80) on tqpair=0x18d5620 00:28:30.906 [2024-10-08 17:44:22.868938] nvme_ctrlr.c:3077:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:30.906 [2024-10-08 17:44:22.868944] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:30.906 [2024-10-08 17:44:22.868954] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.868958] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18d5620) 00:28:30.906 [2024-10-08 17:44:22.868965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.906 [2024-10-08 17:44:22.868988] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935a80, cid 4, qid 0 00:28:30.906 [2024-10-08 17:44:22.869171] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:30.906 [2024-10-08 17:44:22.869178] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:30.906 [2024-10-08 17:44:22.869182] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:30.906 [2024-10-08 17:44:22.869185] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18d5620): datao=0, datal=4096, cccid=4 00:28:30.906 [2024-10-08 17:44:22.869190] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1935a80) on tqpair(0x18d5620): expected_datao=0, payload_size=4096 00:28:30.906 [2024-10-08 17:44:22.869194] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.907 [2024-10-08 17:44:22.869208] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:30.907 [2024-10-08 17:44:22.869212] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:31.171 [2024-10-08 17:44:22.913989] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.171 [2024-10-08 17:44:22.914003] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.171 [2024-10-08 17:44:22.914007] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.171 [2024-10-08 17:44:22.914011] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935a80) on tqpair=0x18d5620 00:28:31.171 [2024-10-08 17:44:22.914026] nvme_ctrlr.c:4220:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:31.171 [2024-10-08 17:44:22.914058] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.171 [2024-10-08 17:44:22.914062] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18d5620) 00:28:31.171 [2024-10-08 17:44:22.914078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.171 [2024-10-08 17:44:22.914087] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.171 [2024-10-08 17:44:22.914091] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.171 [2024-10-08 17:44:22.914094] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18d5620) 00:28:31.171 [2024-10-08 17:44:22.914101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:31.171 [2024-10-08 
17:44:22.914115] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935a80, cid 4, qid 0 00:28:31.171 [2024-10-08 17:44:22.914121] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935c00, cid 5, qid 0 00:28:31.171 [2024-10-08 17:44:22.914381] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:31.171 [2024-10-08 17:44:22.914388] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:31.171 [2024-10-08 17:44:22.914392] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:31.171 [2024-10-08 17:44:22.914395] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18d5620): datao=0, datal=1024, cccid=4 00:28:31.171 [2024-10-08 17:44:22.914400] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1935a80) on tqpair(0x18d5620): expected_datao=0, payload_size=1024 00:28:31.171 [2024-10-08 17:44:22.914404] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.171 [2024-10-08 17:44:22.914411] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:31.171 [2024-10-08 17:44:22.914415] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:31.171 [2024-10-08 17:44:22.914421] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.171 [2024-10-08 17:44:22.914427] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.171 [2024-10-08 17:44:22.914430] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.171 [2024-10-08 17:44:22.914434] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935c00) on tqpair=0x18d5620 00:28:31.171 [2024-10-08 17:44:22.955171] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.172 [2024-10-08 17:44:22.955185] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.172 [2024-10-08 17:44:22.955189] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.172 [2024-10-08 17:44:22.955193] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935a80) on tqpair=0x18d5620 00:28:31.172 [2024-10-08 17:44:22.955212] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.172 [2024-10-08 17:44:22.955216] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18d5620) 00:28:31.172 [2024-10-08 17:44:22.955223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.172 [2024-10-08 17:44:22.955238] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935a80, cid 4, qid 0 00:28:31.172 [2024-10-08 17:44:22.955449] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:31.172 [2024-10-08 17:44:22.955456] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:31.172 [2024-10-08 17:44:22.955460] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:31.172 [2024-10-08 17:44:22.955463] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18d5620): datao=0, datal=3072, cccid=4 00:28:31.172 [2024-10-08 17:44:22.955468] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1935a80) on tqpair(0x18d5620): expected_datao=0, payload_size=3072 00:28:31.172 [2024-10-08 17:44:22.955472] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.172 [2024-10-08 17:44:22.955480] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:31.172 [2024-10-08 17:44:22.955483] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:31.172 [2024-10-08 17:44:22.955629] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.172 [2024-10-08 17:44:22.955639] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.172 [2024-10-08 17:44:22.955643] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.172 [2024-10-08 17:44:22.955647] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935a80) on tqpair=0x18d5620 00:28:31.172 [2024-10-08 17:44:22.955656] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.172 [2024-10-08 17:44:22.955660] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18d5620) 00:28:31.172 [2024-10-08 17:44:22.955666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.172 [2024-10-08 17:44:22.955681] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935a80, cid 4, qid 0 00:28:31.172 [2024-10-08 17:44:22.955932] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:31.172 [2024-10-08 17:44:22.955938] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:31.172 [2024-10-08 17:44:22.955941] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:31.172 [2024-10-08 17:44:22.955945] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18d5620): datao=0, datal=8, cccid=4 00:28:31.172 [2024-10-08 17:44:22.955950] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1935a80) on tqpair(0x18d5620): expected_datao=0, payload_size=8 00:28:31.172 [2024-10-08 17:44:22.955954] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.172 [2024-10-08 17:44:22.955961] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:31.172 [2024-10-08 17:44:22.955964] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:31.172 [2024-10-08 17:44:22.997989] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.172 [2024-10-08 17:44:22.998002] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.172 [2024-10-08 17:44:22.998006] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.172 [2024-10-08 17:44:22.998010] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935a80) on tqpair=0x18d5620
00:28:31.172 =====================================================
00:28:31.172 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:28:31.172 =====================================================
00:28:31.172 Controller Capabilities/Features
00:28:31.172 ================================
00:28:31.172 Vendor ID: 0000
00:28:31.172 Subsystem Vendor ID: 0000
00:28:31.172 Serial Number: ....................
00:28:31.172 Model Number: ........................................
00:28:31.172 Firmware Version: 25.01
00:28:31.172 Recommended Arb Burst: 0
00:28:31.172 IEEE OUI Identifier: 00 00 00
00:28:31.172 Multi-path I/O
00:28:31.172 May have multiple subsystem ports: No
00:28:31.172 May have multiple controllers: No
00:28:31.172 Associated with SR-IOV VF: No
00:28:31.172 Max Data Transfer Size: 131072
00:28:31.172 Max Number of Namespaces: 0
00:28:31.172 Max Number of I/O Queues: 1024
00:28:31.172 NVMe Specification Version (VS): 1.3
00:28:31.172 NVMe Specification Version (Identify): 1.3
00:28:31.172 Maximum Queue Entries: 128
00:28:31.172 Contiguous Queues Required: Yes
00:28:31.172 Arbitration Mechanisms Supported
00:28:31.172 Weighted Round Robin: Not Supported
00:28:31.172 Vendor Specific: Not Supported
00:28:31.172 Reset Timeout: 15000 ms
00:28:31.172 Doorbell Stride: 4 bytes
00:28:31.172 NVM Subsystem Reset: Not Supported
00:28:31.172 Command Sets Supported
00:28:31.172 NVM Command Set: Supported
00:28:31.172 Boot Partition: Not Supported
00:28:31.172 Memory Page Size Minimum: 4096 bytes
00:28:31.172 Memory Page Size Maximum: 4096 bytes
00:28:31.172 Persistent Memory Region: Not Supported
00:28:31.172 Optional Asynchronous Events Supported
00:28:31.172 Namespace Attribute Notices: Not Supported
00:28:31.172 Firmware Activation Notices: Not Supported
00:28:31.172 ANA Change Notices: Not Supported
00:28:31.172 PLE Aggregate Log Change Notices: Not Supported
00:28:31.172 LBA Status Info Alert Notices: Not Supported
00:28:31.172 EGE Aggregate Log Change Notices: Not Supported
00:28:31.172 Normal NVM Subsystem Shutdown event: Not Supported
00:28:31.172 Zone Descriptor Change Notices: Not Supported
00:28:31.172 Discovery Log Change Notices: Supported
00:28:31.172 Controller Attributes
00:28:31.172 128-bit Host Identifier: Not Supported
00:28:31.172 Non-Operational Permissive Mode: Not Supported
00:28:31.172 NVM Sets: Not Supported
00:28:31.172 Read Recovery Levels: Not Supported
00:28:31.172 Endurance Groups: Not Supported
00:28:31.172 Predictable Latency Mode: Not Supported
00:28:31.172 Traffic Based Keep ALive: Not Supported
00:28:31.172 Namespace Granularity: Not Supported
00:28:31.172 SQ Associations: Not Supported
00:28:31.172 UUID List: Not Supported
00:28:31.172 Multi-Domain Subsystem: Not Supported
00:28:31.172 Fixed Capacity Management: Not Supported
00:28:31.172 Variable Capacity Management: Not Supported
00:28:31.172 Delete Endurance Group: Not Supported
00:28:31.172 Delete NVM Set: Not Supported
00:28:31.172 Extended LBA Formats Supported: Not Supported
00:28:31.172 Flexible Data Placement Supported: Not Supported
00:28:31.172
00:28:31.172 Controller Memory Buffer Support
00:28:31.172 ================================
00:28:31.172 Supported: No
00:28:31.172
00:28:31.172 Persistent Memory Region Support
00:28:31.172 ================================
00:28:31.172 Supported: No
00:28:31.172
00:28:31.172 Admin Command Set Attributes
00:28:31.172 ============================
00:28:31.172 Security Send/Receive: Not Supported
00:28:31.172 Format NVM: Not Supported
00:28:31.172 Firmware Activate/Download: Not Supported
00:28:31.172 Namespace Management: Not Supported
00:28:31.172 Device Self-Test: Not Supported
00:28:31.172 Directives: Not Supported
00:28:31.172 NVMe-MI: Not Supported
00:28:31.172 Virtualization Management: Not Supported
00:28:31.172 Doorbell Buffer Config: Not Supported
00:28:31.172 Get LBA Status Capability: Not Supported
00:28:31.172 Command & Feature Lockdown Capability: Not Supported
00:28:31.172 Abort Command Limit: 1
00:28:31.172 Async Event Request Limit: 4
00:28:31.172 Number of Firmware Slots: N/A
00:28:31.172 Firmware Slot 1 Read-Only: N/A
00:28:31.172 Firmware Activation Without Reset: N/A
00:28:31.172 Multiple Update Detection Support: N/A
00:28:31.172 Firmware Update Granularity: No Information Provided
00:28:31.172 Per-Namespace SMART Log: No
00:28:31.172 Asymmetric Namespace Access Log Page: Not Supported
00:28:31.172 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:28:31.172 Command Effects Log Page: Not Supported
00:28:31.172 Get Log Page Extended Data: Supported
00:28:31.172 Telemetry Log Pages: Not Supported
00:28:31.172 Persistent Event Log Pages: Not Supported
00:28:31.172 Supported Log Pages Log Page: May Support
00:28:31.172 Commands Supported & Effects Log Page: Not Supported
00:28:31.172 Feature Identifiers & Effects Log Page:May Support
00:28:31.172 NVMe-MI Commands & Effects Log Page: May Support
00:28:31.172 Data Area 4 for Telemetry Log: Not Supported
00:28:31.172 Error Log Page Entries Supported: 128
00:28:31.172 Keep Alive: Not Supported
00:28:31.172
00:28:31.172 NVM Command Set Attributes
00:28:31.172 ==========================
00:28:31.172 Submission Queue Entry Size
00:28:31.172 Max: 1
00:28:31.172 Min: 1
00:28:31.172 Completion Queue Entry Size
00:28:31.172 Max: 1
00:28:31.172 Min: 1
00:28:31.172 Number of Namespaces: 0
00:28:31.172 Compare Command: Not Supported
00:28:31.172 Write Uncorrectable Command: Not Supported
00:28:31.172 Dataset Management Command: Not Supported
00:28:31.172 Write Zeroes Command: Not Supported
00:28:31.172 Set Features Save Field: Not Supported
00:28:31.172 Reservations: Not Supported
00:28:31.172 Timestamp: Not Supported
00:28:31.172 Copy: Not Supported
00:28:31.172 Volatile Write Cache: Not Present
00:28:31.172 Atomic Write Unit (Normal): 1
00:28:31.172 Atomic Write Unit (PFail): 1
00:28:31.172 Atomic Compare & Write Unit: 1
00:28:31.172 Fused Compare & Write: Supported
00:28:31.172 Scatter-Gather List
00:28:31.172 SGL Command Set: Supported
00:28:31.172 SGL Keyed: Supported
00:28:31.172 SGL Bit Bucket Descriptor: Not Supported
00:28:31.172 SGL Metadata Pointer: Not Supported
00:28:31.172 Oversized SGL: Not Supported
00:28:31.172 SGL Metadata Address: Not Supported
00:28:31.172 SGL Offset: Supported
00:28:31.172 Transport SGL Data Block: Not Supported
00:28:31.172 Replay Protected Memory Block: Not Supported
00:28:31.172
00:28:31.172 Firmware Slot Information
00:28:31.172 =========================
00:28:31.172 Active slot: 0
00:28:31.173
00:28:31.173
00:28:31.173 Error Log
00:28:31.173 =========
00:28:31.173
00:28:31.173 Active Namespaces
00:28:31.173 =================
00:28:31.173 Discovery Log Page
00:28:31.173 ==================
00:28:31.173 Generation Counter: 2
00:28:31.173 Number of Records: 2
00:28:31.173 Record Format: 0
00:28:31.173
00:28:31.173 Discovery Log Entry 0
00:28:31.173 ----------------------
00:28:31.173 Transport Type: 3 (TCP)
00:28:31.173 Address Family: 1 (IPv4)
00:28:31.173 Subsystem Type: 3 (Current Discovery Subsystem)
00:28:31.173 Entry Flags:
00:28:31.173 Duplicate Returned Information: 1
00:28:31.173 Explicit Persistent Connection Support for Discovery: 1
00:28:31.173 Transport Requirements:
00:28:31.173 Secure Channel: Not Required
00:28:31.173 Port ID: 0 (0x0000)
00:28:31.173 Controller ID: 65535 (0xffff)
00:28:31.173 Admin Max SQ Size: 128
00:28:31.173 Transport Service Identifier: 4420
00:28:31.173 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:28:31.173 Transport Address: 10.0.0.2
00:28:31.173 Discovery Log Entry 1
00:28:31.173 ----------------------
00:28:31.173 Transport Type: 3 (TCP)
00:28:31.173 Address Family: 1 (IPv4)
00:28:31.173 Subsystem Type: 2 (NVM Subsystem)
00:28:31.173 Entry Flags:
00:28:31.173 Duplicate Returned Information: 0
00:28:31.173 Explicit Persistent Connection Support for Discovery: 0
00:28:31.173 Transport Requirements:
00:28:31.173 Secure Channel: Not Required
00:28:31.173 Port ID: 0 (0x0000)
00:28:31.173 Controller ID: 65535 (0xffff)
00:28:31.173 Admin Max SQ Size: 128
00:28:31.173 Transport Service Identifier: 4420
00:28:31.173 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:28:31.173 Transport Address: 10.0.0.2
[2024-10-08 17:44:22.998124] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:31.173 [2024-10-08 17:44:22.998137] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935480) on tqpair=0x18d5620 00:28:31.173 [2024-10-08 17:44:22.998144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.173 [2024-10-08 17:44:22.998150] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935600) on tqpair=0x18d5620 00:28:31.173 [2024-10-08 17:44:22.998155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.173 [2024-10-08 17:44:22.998160] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935780) on tqpair=0x18d5620 00:28:31.173 [2024-10-08 17:44:22.998164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.173 [2024-10-08 17:44:22.998169] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935900) on tqpair=0x18d5620 00:28:31.173 [2024-10-08 17:44:22.998174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.173 [2024-10-08 17:44:22.998183] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.998187] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.998191] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18d5620) 00:28:31.173 [2024-10-08 17:44:22.998199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.173 [2024-10-08 17:44:22.998214] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935900, cid 3, qid 0 00:28:31.173 [2024-10-08 17:44:22.998440] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.173 [2024-10-08 17:44:22.998449] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.173 [2024-10-08 17:44:22.998453] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.998457] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935900) on tqpair=0x18d5620 00:28:31.173 [2024-10-08 17:44:22.998465] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.998469] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.998472] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18d5620) 00:28:31.173 [2024-10-08
17:44:22.998479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.173 [2024-10-08 17:44:22.998493] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935900, cid 3, qid 0 00:28:31.173 [2024-10-08 17:44:22.998690] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.173 [2024-10-08 17:44:22.998697] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.173 [2024-10-08 17:44:22.998700] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.998704] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935900) on tqpair=0x18d5620 00:28:31.173 [2024-10-08 17:44:22.998710] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:31.173 [2024-10-08 17:44:22.998715] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:31.173 [2024-10-08 17:44:22.998724] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.998729] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.998733] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18d5620) 00:28:31.173 [2024-10-08 17:44:22.998740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.173 [2024-10-08 17:44:22.998750] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935900, cid 3, qid 0 00:28:31.173 [2024-10-08 17:44:22.998932] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.173 [2024-10-08 17:44:22.998939] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.173 [2024-10-08 17:44:22.998943] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.998947] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935900) on tqpair=0x18d5620 00:28:31.173 [2024-10-08 17:44:22.998958] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.998962] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.998966] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18d5620) 00:28:31.173 [2024-10-08 17:44:22.998972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.173 [2024-10-08 17:44:22.998993] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935900, cid 3, qid 0 00:28:31.173 [2024-10-08 17:44:22.999173] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.173 [2024-10-08 17:44:22.999180] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.173 [2024-10-08 17:44:22.999183] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.999187] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935900) on tqpair=0x18d5620 00:28:31.173 [2024-10-08 17:44:22.999197] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.999202] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.999205] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18d5620) 00:28:31.173 [2024-10-08 17:44:22.999212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.173 [2024-10-08 17:44:22.999225] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935900, cid 3, qid 0 00:28:31.173 [2024-10-08 17:44:22.999459] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.173 [2024-10-08 17:44:22.999466] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.173 [2024-10-08 17:44:22.999470] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.999473] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935900) on tqpair=0x18d5620 00:28:31.173 [2024-10-08 17:44:22.999484] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.999488] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.999491] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18d5620) 00:28:31.173 [2024-10-08 17:44:22.999498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.173 [2024-10-08 17:44:22.999508] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935900, cid 3, qid 0 00:28:31.173 [2024-10-08 17:44:22.999691] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.173 [2024-10-08 17:44:22.999698] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.173 [2024-10-08 17:44:22.999702] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.999706] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935900) on tqpair=0x18d5620 00:28:31.173 [2024-10-08 17:44:22.999721] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.999726] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.999729] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18d5620) 00:28:31.173 [2024-10-08 17:44:22.999736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.173 [2024-10-08 17:44:22.999746] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935900, cid 3, qid 0 00:28:31.173 [2024-10-08 17:44:22.999917] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.173 [2024-10-08 17:44:22.999924] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.173 [2024-10-08 17:44:22.999927] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.999931] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935900) on tqpair=0x18d5620 00:28:31.173 [2024-10-08 17:44:22.999941] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.999945] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:22.999949] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18d5620) 00:28:31.173 [2024-10-08 17:44:22.999956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.173 [2024-10-08 17:44:22.999967] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935900, cid 3, qid 0 00:28:31.173 [2024-10-08 17:44:23.000206] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.173 [2024-10-08 17:44:23.000214] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.173 [2024-10-08 17:44:23.000217] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:23.000221] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935900) on tqpair=0x18d5620 00:28:31.173 [2024-10-08 17:44:23.000232] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:23.000236] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.173 [2024-10-08 17:44:23.000239] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18d5620) 00:28:31.174 [2024-10-08 17:44:23.000246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.174 [2024-10-08 17:44:23.000256] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935900, cid 3, qid 0 00:28:31.174 [2024-10-08 17:44:23.000484] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.174 [2024-10-08 17:44:23.000491] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.174 [2024-10-08 17:44:23.000494] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.000498] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935900) on tqpair=0x18d5620 00:28:31.174 [2024-10-08 17:44:23.000508] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.000512] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.000516] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18d5620) 00:28:31.174 [2024-10-08 17:44:23.000523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.174 [2024-10-08 17:44:23.000533] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935900, cid 3, qid 0 00:28:31.174 [2024-10-08 17:44:23.000759] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.174 [2024-10-08 17:44:23.000766] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.174 [2024-10-08 17:44:23.000769] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.000773] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935900) on tqpair=0x18d5620 00:28:31.174 [2024-10-08 17:44:23.000783] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.000787] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.000791] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18d5620) 00:28:31.174 [2024-10-08 17:44:23.000797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.174 [2024-10-08 17:44:23.000807] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935900, cid 3, qid 0 00:28:31.174 
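The FABRIC PROPERTY GET and PROPERTY SET notices that dominate this stretch of the trace are the NVMe over Fabrics equivalent of register reads and writes: each one is a Fabrics command capsule naming a controller-register offset, and each is answered by one of the pdu type = 5 (CapsuleResp) PDUs logged around it. A rough sketch of the 64-byte submission-queue entry behind a Property Get, paraphrased from the NVMe-oF specification; the field names and exact reserved spans here are illustrative, not copied from SPDK's headers:

#include <stdint.h>

struct nvmf_property_get_cmd {
	uint8_t  opcode;          /* 0x7f: Fabrics command */
	uint8_t  reserved1;
	uint16_t cid;             /* command identifier, the "cid:3" in the log */
	uint8_t  fctype;          /* 0x04 = Property Get, 0x00 = Property Set */
	uint8_t  reserved2[35];
	uint8_t  attrib;          /* 0 = 4-byte property, 1 = 8-byte property */
	uint8_t  reserved3[3];
	uint32_t ofst;            /* register offset, e.g. 0x14 = CC, 0x1c = CSTS */
	uint8_t  reserved4[16];
};                                /* 64 bytes total */

The completion capsule carries the register value back, which is why every CC or CSTS poll shows up as a PROPERTY GET immediately followed by a capsule-response sequence.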
[2024-10-08 17:44:23.001034] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.174 [2024-10-08 17:44:23.001041] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.174 [2024-10-08 17:44:23.001045] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.001049] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935900) on tqpair=0x18d5620 00:28:31.174 [2024-10-08 17:44:23.001059] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.001064] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.001068] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18d5620) 00:28:31.174 [2024-10-08 17:44:23.001075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.174 [2024-10-08 17:44:23.001085] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935900, cid 3, qid 0 00:28:31.174 [2024-10-08 17:44:23.001296] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.174 [2024-10-08 17:44:23.001302] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.174 [2024-10-08 17:44:23.001306] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.001310] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935900) on tqpair=0x18d5620 00:28:31.174 [2024-10-08 17:44:23.001320] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.001325] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.001328] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18d5620) 00:28:31.174 [2024-10-08 17:44:23.001335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.174 [2024-10-08 17:44:23.001345] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935900, cid 3, qid 0 00:28:31.174 [2024-10-08 17:44:23.001564] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.174 [2024-10-08 17:44:23.001577] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.174 [2024-10-08 17:44:23.001580] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.001584] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935900) on tqpair=0x18d5620 00:28:31.174 [2024-10-08 17:44:23.001595] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.001598] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.001602] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18d5620) 00:28:31.174 [2024-10-08 17:44:23.001609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.174 [2024-10-08 17:44:23.001619] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935900, cid 3, qid 0 00:28:31.174 [2024-10-08 17:44:23.001806] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.174 [2024-10-08 17:44:23.001813] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
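What these repeated polls implement is the controller shutdown handshake that ends with the "shutdown complete in 7 milliseconds" message just below: the host writes CC.SHN = 01b (normal shutdown notification) and then re-reads CSTS until CSTS.SHST reports completion. A minimal sketch of that loop, assuming hypothetical read_reg32()/write_reg32() transport helpers (over fabrics each becomes a Property Get/Set capsule as above); register offsets and bit positions are from the NVMe base specification, and this is not SPDK's implementation:

#include <stdint.h>
#include <stdbool.h>

#define NVME_REG_CC     0x14   /* Controller Configuration */
#define NVME_REG_CSTS   0x1c   /* Controller Status */
#define CC_SHN_SHIFT    14     /* CC.SHN occupies bits 15:14 */
#define CC_SHN_NORMAL   0x1u   /* 01b = normal shutdown notification */
#define CSTS_SHST_MASK  0xcu   /* CSTS.SHST occupies bits 3:2 */
#define CSTS_SHST_DONE  0x8u   /* 10b = shutdown processing complete */

uint32_t read_reg32(uint32_t ofst);            /* assumed helper: Property Get */
void write_reg32(uint32_t ofst, uint32_t val); /* assumed helper: Property Set */
void sleep_ms(unsigned ms);                    /* assumed helper */

bool nvme_shutdown(unsigned timeout_ms)
{
	uint32_t cc = read_reg32(NVME_REG_CC);

	cc &= ~(0x3u << CC_SHN_SHIFT);             /* clear any previous SHN value */
	cc |= CC_SHN_NORMAL << CC_SHN_SHIFT;
	write_reg32(NVME_REG_CC, cc);              /* the FABRIC PROPERTY SET */

	for (unsigned waited = 0; waited < timeout_ms; waited++) {
		/* each iteration is one FABRIC PROPERTY GET of CSTS, as traced above */
		if ((read_reg32(NVME_REG_CSTS) & CSTS_SHST_MASK) == CSTS_SHST_DONE)
			return true;
		sleep_ms(1);
	}
	return false;
}

The loop is bounded by the shutdown timeout the log derives from RTD3E: with RTD3E = 0 us it falls back to the 10000 ms default noted above.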
00:28:31.174 [2024-10-08 17:44:23.001816] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.001820] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935900) on tqpair=0x18d5620 00:28:31.174 [2024-10-08 17:44:23.001830] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.001834] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.001838] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18d5620) 00:28:31.174 [2024-10-08 17:44:23.001845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.174 [2024-10-08 17:44:23.001855] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935900, cid 3, qid 0 00:28:31.174 [2024-10-08 17:44:23.005985] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.174 [2024-10-08 17:44:23.005994] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.174 [2024-10-08 17:44:23.005998] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.006002] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935900) on tqpair=0x18d5620 00:28:31.174 [2024-10-08 17:44:23.006012] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.006016] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.006019] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18d5620) 00:28:31.174 [2024-10-08 17:44:23.006026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.174 [2024-10-08 17:44:23.006038] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1935900, cid 3, qid 0 00:28:31.174 [2024-10-08 17:44:23.006222] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.174 [2024-10-08 17:44:23.006229] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.174 [2024-10-08 17:44:23.006232] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.006236] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1935900) on tqpair=0x18d5620 00:28:31.174 [2024-10-08 17:44:23.006245] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:28:31.174 00:28:31.174 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:31.174 [2024-10-08 17:44:23.053779] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
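The spdk_nvme_identify invocation above produces everything that follows: it parses the -r transport-ID string, connects to nqn.2016-06.io.spdk:cnode1 over TCP, and replays the same admin bring-up just traced for the discovery controller. A minimal sketch of that flow against SPDK's public API (spdk/nvme.h), with error handling trimmed; env-options usage may need adjusting across SPDK releases:

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";          /* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0)
		return 1;

	/* Same transport-ID string the test passes via -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0)
		return 1;

	/* Drives the whole admin bring-up traced below: TCP connect,
	 * ICReq/ICResp, FABRIC CONNECT, VS/CAP property reads, the CC.EN
	 * handshake, IDENTIFY, AER setup, and keep-alive configuration. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL)
		return 1;

	printf("CNTLID 0x%04x\n", spdk_nvme_ctrlr_get_data(ctrlr)->cntlid);
	spdk_nvme_detach(ctrlr);
	return 0;
}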
00:28:31.174 [2024-10-08 17:44:23.053829] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465781 ] 00:28:31.174 [2024-10-08 17:44:23.094999] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:31.174 [2024-10-08 17:44:23.095056] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:31.174 [2024-10-08 17:44:23.095061] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:31.174 [2024-10-08 17:44:23.095082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:31.174 [2024-10-08 17:44:23.095092] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:31.174 [2024-10-08 17:44:23.095759] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:31.174 [2024-10-08 17:44:23.095799] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x16dd620 0 00:28:31.174 [2024-10-08 17:44:23.101995] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:31.174 [2024-10-08 17:44:23.102012] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:31.174 [2024-10-08 17:44:23.102017] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:31.174 [2024-10-08 17:44:23.102020] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:31.174 [2024-10-08 17:44:23.102056] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.102062] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.102066] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16dd620) 00:28:31.174 [2024-10-08 17:44:23.102080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:31.174 [2024-10-08 17:44:23.102105] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d480, cid 0, qid 0 00:28:31.174 [2024-10-08 17:44:23.109992] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.174 [2024-10-08 17:44:23.110003] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.174 [2024-10-08 17:44:23.110007] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.110012] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d480) on tqpair=0x16dd620 00:28:31.174 [2024-10-08 17:44:23.110024] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:31.174 [2024-10-08 17:44:23.110032] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:31.174 [2024-10-08 17:44:23.110037] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:31.174 [2024-10-08 17:44:23.110052] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.110056] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.110060] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16dd620) 00:28:31.174 [2024-10-08 17:44:23.110068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.174 [2024-10-08 17:44:23.110084] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d480, cid 0, qid 0 00:28:31.174 [2024-10-08 17:44:23.110311] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.174 [2024-10-08 17:44:23.110317] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.174 [2024-10-08 17:44:23.110321] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.174 [2024-10-08 17:44:23.110325] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d480) on tqpair=0x16dd620 00:28:31.174 [2024-10-08 17:44:23.110330] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:31.174 [2024-10-08 17:44:23.110343] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:31.175 [2024-10-08 17:44:23.110350] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.110354] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.110358] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16dd620) 00:28:31.175 [2024-10-08 17:44:23.110365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.175 [2024-10-08 17:44:23.110376] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d480, cid 0, qid 0 00:28:31.175 [2024-10-08 17:44:23.110576] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.175 [2024-10-08 17:44:23.110583] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.175 [2024-10-08 17:44:23.110586] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.110590] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d480) on tqpair=0x16dd620 00:28:31.175 [2024-10-08 17:44:23.110595] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:31.175 [2024-10-08 17:44:23.110604] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:31.175 [2024-10-08 17:44:23.110611] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.110614] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.110618] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16dd620) 00:28:31.175 [2024-10-08 17:44:23.110625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.175 [2024-10-08 17:44:23.110635] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d480, cid 0, qid 0 00:28:31.175 [2024-10-08 17:44:23.110823] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.175 [2024-10-08 17:44:23.110829] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.175 [2024-10-08 17:44:23.110833] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.110837] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d480) on tqpair=0x16dd620 00:28:31.175 [2024-10-08 17:44:23.110842] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:31.175 [2024-10-08 17:44:23.110851] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.110857] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.110860] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16dd620) 00:28:31.175 [2024-10-08 17:44:23.110867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.175 [2024-10-08 17:44:23.110877] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d480, cid 0, qid 0 00:28:31.175 [2024-10-08 17:44:23.111061] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.175 [2024-10-08 17:44:23.111068] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.175 [2024-10-08 17:44:23.111071] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.111075] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d480) on tqpair=0x16dd620 00:28:31.175 [2024-10-08 17:44:23.111080] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:31.175 [2024-10-08 17:44:23.111085] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:31.175 [2024-10-08 17:44:23.111093] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:31.175 [2024-10-08 17:44:23.111202] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:31.175 [2024-10-08 17:44:23.111206] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:31.175 [2024-10-08 17:44:23.111215] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.111218] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.111222] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16dd620) 00:28:31.175 [2024-10-08 17:44:23.111229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.175 [2024-10-08 17:44:23.111239] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d480, cid 0, qid 0 00:28:31.175 [2024-10-08 17:44:23.111450] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.175 [2024-10-08 17:44:23.111456] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.175 [2024-10-08 17:44:23.111459] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.111463] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d480) on tqpair=0x16dd620 00:28:31.175 [2024-10-08 17:44:23.111468] 
nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:31.175 [2024-10-08 17:44:23.111477] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.111481] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.111485] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16dd620) 00:28:31.175 [2024-10-08 17:44:23.111491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.175 [2024-10-08 17:44:23.111502] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d480, cid 0, qid 0 00:28:31.175 [2024-10-08 17:44:23.111707] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.175 [2024-10-08 17:44:23.111714] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.175 [2024-10-08 17:44:23.111717] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.111721] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d480) on tqpair=0x16dd620 00:28:31.175 [2024-10-08 17:44:23.111725] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:31.175 [2024-10-08 17:44:23.111730] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:31.175 [2024-10-08 17:44:23.111738] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:31.175 [2024-10-08 17:44:23.111745] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:31.175 [2024-10-08 17:44:23.111754] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.111758] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16dd620) 00:28:31.175 [2024-10-08 17:44:23.111765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.175 [2024-10-08 17:44:23.111775] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d480, cid 0, qid 0 00:28:31.175 [2024-10-08 17:44:23.112031] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:31.175 [2024-10-08 17:44:23.112038] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:31.175 [2024-10-08 17:44:23.112041] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.112045] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16dd620): datao=0, datal=4096, cccid=0 00:28:31.175 [2024-10-08 17:44:23.112053] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x173d480) on tqpair(0x16dd620): expected_datao=0, payload_size=4096 00:28:31.175 [2024-10-08 17:44:23.112057] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.112065] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.112069] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:31.175 [2024-10-08 
17:44:23.112217] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.175 [2024-10-08 17:44:23.112223] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.175 [2024-10-08 17:44:23.112227] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.112231] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d480) on tqpair=0x16dd620 00:28:31.175 [2024-10-08 17:44:23.112239] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:31.175 [2024-10-08 17:44:23.112244] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:31.175 [2024-10-08 17:44:23.112248] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:31.175 [2024-10-08 17:44:23.112252] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:31.175 [2024-10-08 17:44:23.112257] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:31.175 [2024-10-08 17:44:23.112262] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:31.175 [2024-10-08 17:44:23.112270] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:31.175 [2024-10-08 17:44:23.112280] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.112284] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.112288] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16dd620) 00:28:31.175 [2024-10-08 17:44:23.112295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:31.175 [2024-10-08 17:44:23.112307] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d480, cid 0, qid 0 00:28:31.175 [2024-10-08 17:44:23.112513] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.175 [2024-10-08 17:44:23.112519] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.175 [2024-10-08 17:44:23.112523] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.112527] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d480) on tqpair=0x16dd620 00:28:31.175 [2024-10-08 17:44:23.112534] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.112537] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.112541] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16dd620) 00:28:31.175 [2024-10-08 17:44:23.112547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:31.175 [2024-10-08 17:44:23.112554] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.112557] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.112561] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x16dd620) 00:28:31.175 
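The entries above record the standard NVMe-oF controller-enable handshake: the driver clears CC.EN and waits for CSTS.RDY = 0, sets CC.EN = 1, waits for CSTS.RDY = 1, and then moves on to IDENTIFY and AER configuration, all carried over Fabrics Property Get/Set capsules (the FABRIC PROPERTY GET/SET notices). A minimal C sketch of that handshake, assuming hypothetical nvmf_prop_get()/nvmf_prop_set() helpers in place of the real capsule plumbing:

    /*
     * Sketch of the CC.EN / CSTS.RDY handshake logged above.
     * nvmf_prop_get()/nvmf_prop_set() are hypothetical stand-ins for
     * the Fabrics Property Get/Set capsules the driver actually sends.
     */
    #include <stdint.h>

    #define NVMF_PROP_CC   0x14  /* Controller Configuration register offset */
    #define NVMF_PROP_CSTS 0x1c  /* Controller Status register offset */

    extern uint64_t nvmf_prop_get(uint32_t ofst);             /* hypothetical */
    extern void     nvmf_prop_set(uint32_t ofst, uint64_t v); /* hypothetical */

    static void enable_controller(void)
    {
        /* 1. Clear CC.EN, then wait for CSTS.RDY to drop to 0
         *    ("setting state to disable and wait for CSTS.RDY = 0"). */
        nvmf_prop_set(NVMF_PROP_CC, nvmf_prop_get(NVMF_PROP_CC) & ~1ULL);
        while (nvmf_prop_get(NVMF_PROP_CSTS) & 1)
            ;   /* real driver polls with a 15 s state-machine timeout */

        /* 2. Set CC.EN = 1, then wait for CSTS.RDY = 1
         *    ("CC.EN = 1 && CSTS.RDY = 1 - controller is ready"). */
        nvmf_prop_set(NVMF_PROP_CC, nvmf_prop_get(NVMF_PROP_CC) | 1ULL);
        while (!(nvmf_prop_get(NVMF_PROP_CSTS) & 1))
            ;

        /* Controller is ready; the driver proceeds to IDENTIFY. */
    }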
[2024-10-08 17:44:23.112567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:31.175 [2024-10-08 17:44:23.112573] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.112577] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.112582] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x16dd620) 00:28:31.175 [2024-10-08 17:44:23.112588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:31.175 [2024-10-08 17:44:23.112595] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.112598] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.175 [2024-10-08 17:44:23.112602] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dd620) 00:28:31.175 [2024-10-08 17:44:23.112607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:31.176 [2024-10-08 17:44:23.112612] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:31.176 [2024-10-08 17:44:23.112623] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:31.176 [2024-10-08 17:44:23.112630] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.176 [2024-10-08 17:44:23.112633] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16dd620) 00:28:31.176 [2024-10-08 17:44:23.112640] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.176 [2024-10-08 17:44:23.112652] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d480, cid 0, qid 0 00:28:31.176 [2024-10-08 17:44:23.112657] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d600, cid 1, qid 0 00:28:31.176 [2024-10-08 17:44:23.112662] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d780, cid 2, qid 0 00:28:31.176 [2024-10-08 17:44:23.112666] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d900, cid 3, qid 0 00:28:31.176 [2024-10-08 17:44:23.112671] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173da80, cid 4, qid 0 00:28:31.176 [2024-10-08 17:44:23.112930] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.176 [2024-10-08 17:44:23.112939] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.176 [2024-10-08 17:44:23.112942] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.176 [2024-10-08 17:44:23.112946] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173da80) on tqpair=0x16dd620 00:28:31.176 [2024-10-08 17:44:23.112951] nvme_ctrlr.c:3077:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:31.176 [2024-10-08 17:44:23.112956] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:31.176 [2024-10-08 17:44:23.112968] 
nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:31.176 [2024-10-08 17:44:23.112982] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:31.176 [2024-10-08 17:44:23.112989] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.176 [2024-10-08 17:44:23.112992] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.176 [2024-10-08 17:44:23.112996] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16dd620) 00:28:31.176 [2024-10-08 17:44:23.113003] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:31.176 [2024-10-08 17:44:23.113014] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173da80, cid 4, qid 0 00:28:31.176 [2024-10-08 17:44:23.113217] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.176 [2024-10-08 17:44:23.113225] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.176 [2024-10-08 17:44:23.113228] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.176 [2024-10-08 17:44:23.113232] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173da80) on tqpair=0x16dd620 00:28:31.176 [2024-10-08 17:44:23.113298] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:31.176 [2024-10-08 17:44:23.113308] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:31.176 [2024-10-08 17:44:23.113316] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.176 [2024-10-08 17:44:23.113320] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16dd620) 00:28:31.176 [2024-10-08 17:44:23.113326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.176 [2024-10-08 17:44:23.113337] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173da80, cid 4, qid 0 00:28:31.176 [2024-10-08 17:44:23.113524] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:31.176 [2024-10-08 17:44:23.113531] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:31.176 [2024-10-08 17:44:23.113534] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:31.176 [2024-10-08 17:44:23.113538] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16dd620): datao=0, datal=4096, cccid=4 00:28:31.176 [2024-10-08 17:44:23.113543] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x173da80) on tqpair(0x16dd620): expected_datao=0, payload_size=4096 00:28:31.176 [2024-10-08 17:44:23.113547] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.176 [2024-10-08 17:44:23.113561] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:31.176 [2024-10-08 17:44:23.113565] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:31.176 [2024-10-08 17:44:23.157988] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.176 [2024-10-08 17:44:23.157999] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:28:31.176 [2024-10-08 17:44:23.158003] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.176 [2024-10-08 17:44:23.158007] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173da80) on tqpair=0x16dd620 00:28:31.176 [2024-10-08 17:44:23.158024] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:31.176 [2024-10-08 17:44:23.158043] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:31.176 [2024-10-08 17:44:23.158054] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:31.176 [2024-10-08 17:44:23.158061] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.176 [2024-10-08 17:44:23.158065] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16dd620) 00:28:31.176 [2024-10-08 17:44:23.158072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.176 [2024-10-08 17:44:23.158086] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173da80, cid 4, qid 0 00:28:31.176 [2024-10-08 17:44:23.158274] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:31.176 [2024-10-08 17:44:23.158281] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:31.176 [2024-10-08 17:44:23.158284] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:31.176 [2024-10-08 17:44:23.158288] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16dd620): datao=0, datal=4096, cccid=4 00:28:31.176 [2024-10-08 17:44:23.158292] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x173da80) on tqpair(0x16dd620): expected_datao=0, payload_size=4096 00:28:31.176 [2024-10-08 17:44:23.158297] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.176 [2024-10-08 17:44:23.158310] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:31.176 [2024-10-08 17:44:23.158315] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:31.441 [2024-10-08 17:44:23.199169] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.441 [2024-10-08 17:44:23.199186] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.441 [2024-10-08 17:44:23.199190] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.441 [2024-10-08 17:44:23.199194] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173da80) on tqpair=0x16dd620 00:28:31.441 [2024-10-08 17:44:23.199207] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:31.441 [2024-10-08 17:44:23.199217] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:31.441 [2024-10-08 17:44:23.199225] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.441 [2024-10-08 17:44:23.199229] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16dd620) 00:28:31.441 [2024-10-08 17:44:23.199236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.441 [2024-10-08 17:44:23.199248] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173da80, cid 4, qid 0 00:28:31.441 [2024-10-08 17:44:23.199509] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:31.441 [2024-10-08 17:44:23.199515] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:31.441 [2024-10-08 17:44:23.199519] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:31.441 [2024-10-08 17:44:23.199522] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16dd620): datao=0, datal=4096, cccid=4 00:28:31.441 [2024-10-08 17:44:23.199527] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x173da80) on tqpair(0x16dd620): expected_datao=0, payload_size=4096 00:28:31.441 [2024-10-08 17:44:23.199531] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.441 [2024-10-08 17:44:23.199538] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:31.441 [2024-10-08 17:44:23.199542] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:31.441 [2024-10-08 17:44:23.240142] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.441 [2024-10-08 17:44:23.240153] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.441 [2024-10-08 17:44:23.240156] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.441 [2024-10-08 17:44:23.240161] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173da80) on tqpair=0x16dd620 00:28:31.441 [2024-10-08 17:44:23.240184] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:31.441 [2024-10-08 17:44:23.240193] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:31.441 [2024-10-08 17:44:23.240202] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:31.441 [2024-10-08 17:44:23.240209] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:28:31.441 [2024-10-08 17:44:23.240214] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:31.441 [2024-10-08 17:44:23.240220] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:31.441 [2024-10-08 17:44:23.240226] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:31.441 [2024-10-08 17:44:23.240231] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:31.441 [2024-10-08 17:44:23.240236] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:31.441 [2024-10-08 17:44:23.240255] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.441 [2024-10-08 17:44:23.240263] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16dd620) 00:28:31.441 [2024-10-08 17:44:23.240270] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.441 [2024-10-08 17:44:23.240278] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.441 [2024-10-08 17:44:23.240281] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.441 [2024-10-08 17:44:23.240285] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16dd620) 00:28:31.441 [2024-10-08 17:44:23.240292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:31.441 [2024-10-08 17:44:23.240305] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173da80, cid 4, qid 0 00:28:31.441 [2024-10-08 17:44:23.240310] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173dc00, cid 5, qid 0 00:28:31.441 [2024-10-08 17:44:23.240423] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.441 [2024-10-08 17:44:23.240430] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.441 [2024-10-08 17:44:23.240433] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.441 [2024-10-08 17:44:23.240437] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173da80) on tqpair=0x16dd620 00:28:31.441 [2024-10-08 17:44:23.240444] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.441 [2024-10-08 17:44:23.240450] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.441 [2024-10-08 17:44:23.240454] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.441 [2024-10-08 17:44:23.240457] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173dc00) on tqpair=0x16dd620 00:28:31.441 [2024-10-08 17:44:23.240467] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.441 [2024-10-08 17:44:23.240471] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16dd620) 00:28:31.441 [2024-10-08 17:44:23.240478] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.441 [2024-10-08 17:44:23.240488] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173dc00, cid 5, qid 0 00:28:31.441 [2024-10-08 17:44:23.240692] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.441 [2024-10-08 17:44:23.240698] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.441 [2024-10-08 17:44:23.240702] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.441 [2024-10-08 17:44:23.240706] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173dc00) on tqpair=0x16dd620 00:28:31.441 [2024-10-08 17:44:23.240715] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.441 [2024-10-08 17:44:23.240719] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16dd620) 00:28:31.441 [2024-10-08 17:44:23.240726] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.441 [2024-10-08 17:44:23.240736] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173dc00, cid 5, qid 0 00:28:31.441 [2024-10-08 17:44:23.240922] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.441 
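By this point the init state machine has walked enable → identify → AER setup → keep-alive → queue count → namespace scan → ready, and the driver is issuing the trailing GET FEATURES / GET LOG PAGE commands. On the host side the entire sequence is driven by a single connect call; a sketch against the public SPDK host API (abbreviated, minimal error handling — the address, NQN, and model string mirror the values in this log):

    /* Sketch: connecting to the target exercised by this test; the one
     * spdk_nvme_connect() call drives all of the init states logged above. */
    #include "spdk/env.h"
    #include "spdk/nvme.h"
    #include <stdio.h>

    int main(void)
    {
        struct spdk_env_opts env_opts;
        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";      /* hypothetical app name */
        if (spdk_env_init(&env_opts) < 0)
            return 1;

        struct spdk_nvme_transport_id trid = {0};
        spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
        trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
        snprintf(trid.traddr,  sizeof(trid.traddr),  "10.0.0.2");
        snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
        snprintf(trid.subnqn,  sizeof(trid.subnqn),  "nqn.2016-06.io.spdk:cnode1");

        /* Blocks until the controller reaches the "ready" state above. */
        struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL)
            return 1;

        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Model Number: %.40s\n", (const char *)cdata->mn); /* "SPDK bdev Controller" */

        spdk_nvme_detach(ctrlr);   /* triggers the shutdown sequence seen later */
        return 0;
    }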
[2024-10-08 17:44:23.240929] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.441 [2024-10-08 17:44:23.240932] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.441 [2024-10-08 17:44:23.240936] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173dc00) on tqpair=0x16dd620 00:28:31.441 [2024-10-08 17:44:23.240946] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.441 [2024-10-08 17:44:23.240950] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16dd620) 00:28:31.441 [2024-10-08 17:44:23.240957] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.441 [2024-10-08 17:44:23.240969] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173dc00, cid 5, qid 0 00:28:31.441 [2024-10-08 17:44:23.241185] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.441 [2024-10-08 17:44:23.241192] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.441 [2024-10-08 17:44:23.241195] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.441 [2024-10-08 17:44:23.241199] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173dc00) on tqpair=0x16dd620 00:28:31.441 [2024-10-08 17:44:23.241216] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.441 [2024-10-08 17:44:23.241220] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16dd620) 00:28:31.441 [2024-10-08 17:44:23.241227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.441 [2024-10-08 17:44:23.241235] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.441 [2024-10-08 17:44:23.241238] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16dd620) 00:28:31.441 [2024-10-08 17:44:23.241245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.442 [2024-10-08 17:44:23.241252] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.442 [2024-10-08 17:44:23.241256] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x16dd620) 00:28:31.442 [2024-10-08 17:44:23.241263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.442 [2024-10-08 17:44:23.241271] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.442 [2024-10-08 17:44:23.241275] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x16dd620) 00:28:31.442 [2024-10-08 17:44:23.241281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.442 [2024-10-08 17:44:23.241293] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173dc00, cid 5, qid 0 00:28:31.442 [2024-10-08 17:44:23.241298] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173da80, cid 4, qid 0 00:28:31.442 [2024-10-08 17:44:23.241303] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173dd80, cid 6, qid 0 00:28:31.442 [2024-10-08 17:44:23.241308] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173df00, cid 7, qid 0 00:28:31.442 [2024-10-08 17:44:23.241582] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:31.442 [2024-10-08 17:44:23.241588] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:31.442 [2024-10-08 17:44:23.241592] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:31.442 [2024-10-08 17:44:23.241596] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16dd620): datao=0, datal=8192, cccid=5 00:28:31.442 [2024-10-08 17:44:23.241600] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x173dc00) on tqpair(0x16dd620): expected_datao=0, payload_size=8192 00:28:31.442 [2024-10-08 17:44:23.241605] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.442 [2024-10-08 17:44:23.241690] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:31.442 [2024-10-08 17:44:23.241695] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:31.442 [2024-10-08 17:44:23.241701] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:31.442 [2024-10-08 17:44:23.241706] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:31.442 [2024-10-08 17:44:23.241710] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:31.442 [2024-10-08 17:44:23.241713] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16dd620): datao=0, datal=512, cccid=4 00:28:31.442 [2024-10-08 17:44:23.241718] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x173da80) on tqpair(0x16dd620): expected_datao=0, payload_size=512 00:28:31.442 [2024-10-08 17:44:23.241725] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.442 [2024-10-08 17:44:23.241747] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:31.442 [2024-10-08 17:44:23.241751] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:31.442 [2024-10-08 17:44:23.241757] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:31.442 [2024-10-08 17:44:23.241763] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:31.442 [2024-10-08 17:44:23.241766] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:31.442 [2024-10-08 17:44:23.241770] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16dd620): datao=0, datal=512, cccid=6 00:28:31.442 [2024-10-08 17:44:23.241774] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x173dd80) on tqpair(0x16dd620): expected_datao=0, payload_size=512 00:28:31.442 [2024-10-08 17:44:23.241779] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.442 [2024-10-08 17:44:23.241785] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:31.442 [2024-10-08 17:44:23.241789] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:31.442 [2024-10-08 17:44:23.241794] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:31.442 [2024-10-08 17:44:23.241800] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:31.442 [2024-10-08 17:44:23.241803] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:31.442 [2024-10-08 17:44:23.241807] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x16dd620): datao=0, datal=4096, cccid=7 00:28:31.442 [2024-10-08 17:44:23.241811] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x173df00) on tqpair(0x16dd620): expected_datao=0, payload_size=4096 00:28:31.442 [2024-10-08 17:44:23.241815] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.442 [2024-10-08 17:44:23.241822] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:31.442 [2024-10-08 17:44:23.241826] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:31.442 [2024-10-08 17:44:23.245987] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.442 [2024-10-08 17:44:23.245996] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.442 [2024-10-08 17:44:23.245999] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.442 [2024-10-08 17:44:23.246003] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173dc00) on tqpair=0x16dd620 00:28:31.442 [2024-10-08 17:44:23.246018] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.442 [2024-10-08 17:44:23.246024] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.442 [2024-10-08 17:44:23.246028] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.442 [2024-10-08 17:44:23.246032] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173da80) on tqpair=0x16dd620 00:28:31.442 [2024-10-08 17:44:23.246043] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.442 [2024-10-08 17:44:23.246049] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.442 [2024-10-08 17:44:23.246052] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.442 [2024-10-08 17:44:23.246056] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173dd80) on tqpair=0x16dd620 00:28:31.442 [2024-10-08 17:44:23.246063] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.442 [2024-10-08 17:44:23.246069] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.442 [2024-10-08 17:44:23.246072] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.442 [2024-10-08 17:44:23.246076] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173df00) on tqpair=0x16dd620 00:28:31.442 ===================================================== 00:28:31.442 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:31.442 ===================================================== 00:28:31.442 Controller Capabilities/Features 00:28:31.442 ================================ 00:28:31.442 Vendor ID: 8086 00:28:31.442 Subsystem Vendor ID: 8086 00:28:31.442 Serial Number: SPDK00000000000001 00:28:31.442 Model Number: SPDK bdev Controller 00:28:31.442 Firmware Version: 25.01 00:28:31.442 Recommended Arb Burst: 6 00:28:31.442 IEEE OUI Identifier: e4 d2 5c 00:28:31.442 Multi-path I/O 00:28:31.442 May have multiple subsystem ports: Yes 00:28:31.442 May have multiple controllers: Yes 00:28:31.442 Associated with SR-IOV VF: No 00:28:31.442 Max Data Transfer Size: 131072 00:28:31.442 Max Number of Namespaces: 32 00:28:31.442 Max Number of I/O Queues: 127 00:28:31.442 NVMe Specification Version (VS): 1.3 00:28:31.442 NVMe Specification Version (Identify): 1.3 00:28:31.442 Maximum Queue Entries: 128 00:28:31.442 Contiguous Queues Required: Yes 00:28:31.442 Arbitration Mechanisms Supported 00:28:31.442 Weighted Round Robin: Not Supported 
00:28:31.442 Vendor Specific: Not Supported 00:28:31.442 Reset Timeout: 15000 ms 00:28:31.442 Doorbell Stride: 4 bytes 00:28:31.442 NVM Subsystem Reset: Not Supported 00:28:31.442 Command Sets Supported 00:28:31.442 NVM Command Set: Supported 00:28:31.442 Boot Partition: Not Supported 00:28:31.442 Memory Page Size Minimum: 4096 bytes 00:28:31.442 Memory Page Size Maximum: 4096 bytes 00:28:31.442 Persistent Memory Region: Not Supported 00:28:31.442 Optional Asynchronous Events Supported 00:28:31.442 Namespace Attribute Notices: Supported 00:28:31.442 Firmware Activation Notices: Not Supported 00:28:31.442 ANA Change Notices: Not Supported 00:28:31.442 PLE Aggregate Log Change Notices: Not Supported 00:28:31.442 LBA Status Info Alert Notices: Not Supported 00:28:31.442 EGE Aggregate Log Change Notices: Not Supported 00:28:31.442 Normal NVM Subsystem Shutdown event: Not Supported 00:28:31.442 Zone Descriptor Change Notices: Not Supported 00:28:31.442 Discovery Log Change Notices: Not Supported 00:28:31.442 Controller Attributes 00:28:31.442 128-bit Host Identifier: Supported 00:28:31.442 Non-Operational Permissive Mode: Not Supported 00:28:31.442 NVM Sets: Not Supported 00:28:31.442 Read Recovery Levels: Not Supported 00:28:31.442 Endurance Groups: Not Supported 00:28:31.442 Predictable Latency Mode: Not Supported 00:28:31.442 Traffic Based Keep ALive: Not Supported 00:28:31.442 Namespace Granularity: Not Supported 00:28:31.442 SQ Associations: Not Supported 00:28:31.442 UUID List: Not Supported 00:28:31.442 Multi-Domain Subsystem: Not Supported 00:28:31.442 Fixed Capacity Management: Not Supported 00:28:31.442 Variable Capacity Management: Not Supported 00:28:31.442 Delete Endurance Group: Not Supported 00:28:31.442 Delete NVM Set: Not Supported 00:28:31.442 Extended LBA Formats Supported: Not Supported 00:28:31.442 Flexible Data Placement Supported: Not Supported 00:28:31.442 00:28:31.442 Controller Memory Buffer Support 00:28:31.442 ================================ 00:28:31.442 Supported: No 00:28:31.442 00:28:31.442 Persistent Memory Region Support 00:28:31.442 ================================ 00:28:31.442 Supported: No 00:28:31.442 00:28:31.442 Admin Command Set Attributes 00:28:31.442 ============================ 00:28:31.442 Security Send/Receive: Not Supported 00:28:31.442 Format NVM: Not Supported 00:28:31.442 Firmware Activate/Download: Not Supported 00:28:31.442 Namespace Management: Not Supported 00:28:31.442 Device Self-Test: Not Supported 00:28:31.442 Directives: Not Supported 00:28:31.442 NVMe-MI: Not Supported 00:28:31.442 Virtualization Management: Not Supported 00:28:31.442 Doorbell Buffer Config: Not Supported 00:28:31.442 Get LBA Status Capability: Not Supported 00:28:31.442 Command & Feature Lockdown Capability: Not Supported 00:28:31.442 Abort Command Limit: 4 00:28:31.442 Async Event Request Limit: 4 00:28:31.442 Number of Firmware Slots: N/A 00:28:31.442 Firmware Slot 1 Read-Only: N/A 00:28:31.442 Firmware Activation Without Reset: N/A 00:28:31.442 Multiple Update Detection Support: N/A 00:28:31.442 Firmware Update Granularity: No Information Provided 00:28:31.442 Per-Namespace SMART Log: No 00:28:31.442 Asymmetric Namespace Access Log Page: Not Supported 00:28:31.442 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:31.442 Command Effects Log Page: Supported 00:28:31.443 Get Log Page Extended Data: Supported 00:28:31.443 Telemetry Log Pages: Not Supported 00:28:31.443 Persistent Event Log Pages: Not Supported 00:28:31.443 Supported Log Pages Log Page: May Support 
00:28:31.443 Commands Supported & Effects Log Page: Not Supported 00:28:31.443 Feature Identifiers & Effects Log Page:May Support 00:28:31.443 NVMe-MI Commands & Effects Log Page: May Support 00:28:31.443 Data Area 4 for Telemetry Log: Not Supported 00:28:31.443 Error Log Page Entries Supported: 128 00:28:31.443 Keep Alive: Supported 00:28:31.443 Keep Alive Granularity: 10000 ms 00:28:31.443 00:28:31.443 NVM Command Set Attributes 00:28:31.443 ========================== 00:28:31.443 Submission Queue Entry Size 00:28:31.443 Max: 64 00:28:31.443 Min: 64 00:28:31.443 Completion Queue Entry Size 00:28:31.443 Max: 16 00:28:31.443 Min: 16 00:28:31.443 Number of Namespaces: 32 00:28:31.443 Compare Command: Supported 00:28:31.443 Write Uncorrectable Command: Not Supported 00:28:31.443 Dataset Management Command: Supported 00:28:31.443 Write Zeroes Command: Supported 00:28:31.443 Set Features Save Field: Not Supported 00:28:31.443 Reservations: Supported 00:28:31.443 Timestamp: Not Supported 00:28:31.443 Copy: Supported 00:28:31.443 Volatile Write Cache: Present 00:28:31.443 Atomic Write Unit (Normal): 1 00:28:31.443 Atomic Write Unit (PFail): 1 00:28:31.443 Atomic Compare & Write Unit: 1 00:28:31.443 Fused Compare & Write: Supported 00:28:31.443 Scatter-Gather List 00:28:31.443 SGL Command Set: Supported 00:28:31.443 SGL Keyed: Supported 00:28:31.443 SGL Bit Bucket Descriptor: Not Supported 00:28:31.443 SGL Metadata Pointer: Not Supported 00:28:31.443 Oversized SGL: Not Supported 00:28:31.443 SGL Metadata Address: Not Supported 00:28:31.443 SGL Offset: Supported 00:28:31.443 Transport SGL Data Block: Not Supported 00:28:31.443 Replay Protected Memory Block: Not Supported 00:28:31.443 00:28:31.443 Firmware Slot Information 00:28:31.443 ========================= 00:28:31.443 Active slot: 1 00:28:31.443 Slot 1 Firmware Revision: 25.01 00:28:31.443 00:28:31.443 00:28:31.443 Commands Supported and Effects 00:28:31.443 ============================== 00:28:31.443 Admin Commands 00:28:31.443 -------------- 00:28:31.443 Get Log Page (02h): Supported 00:28:31.443 Identify (06h): Supported 00:28:31.443 Abort (08h): Supported 00:28:31.443 Set Features (09h): Supported 00:28:31.443 Get Features (0Ah): Supported 00:28:31.443 Asynchronous Event Request (0Ch): Supported 00:28:31.443 Keep Alive (18h): Supported 00:28:31.443 I/O Commands 00:28:31.443 ------------ 00:28:31.443 Flush (00h): Supported LBA-Change 00:28:31.443 Write (01h): Supported LBA-Change 00:28:31.443 Read (02h): Supported 00:28:31.443 Compare (05h): Supported 00:28:31.443 Write Zeroes (08h): Supported LBA-Change 00:28:31.443 Dataset Management (09h): Supported LBA-Change 00:28:31.443 Copy (19h): Supported LBA-Change 00:28:31.443 00:28:31.443 Error Log 00:28:31.443 ========= 00:28:31.443 00:28:31.443 Arbitration 00:28:31.443 =========== 00:28:31.443 Arbitration Burst: 1 00:28:31.443 00:28:31.443 Power Management 00:28:31.443 ================ 00:28:31.443 Number of Power States: 1 00:28:31.443 Current Power State: Power State #0 00:28:31.443 Power State #0: 00:28:31.443 Max Power: 0.00 W 00:28:31.443 Non-Operational State: Operational 00:28:31.443 Entry Latency: Not Reported 00:28:31.443 Exit Latency: Not Reported 00:28:31.443 Relative Read Throughput: 0 00:28:31.443 Relative Read Latency: 0 00:28:31.443 Relative Write Throughput: 0 00:28:31.443 Relative Write Latency: 0 00:28:31.443 Idle Power: Not Reported 00:28:31.443 Active Power: Not Reported 00:28:31.443 Non-Operational Permissive Mode: Not Supported 00:28:31.443 00:28:31.443 Health 
Information 00:28:31.443 ================== 00:28:31.443 Critical Warnings: 00:28:31.443 Available Spare Space: OK 00:28:31.443 Temperature: OK 00:28:31.443 Device Reliability: OK 00:28:31.443 Read Only: No 00:28:31.443 Volatile Memory Backup: OK 00:28:31.443 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:31.443 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:28:31.443 Available Spare: 0% 00:28:31.443 Available Spare Threshold: 0% 00:28:31.443 Life Percentage Used:[2024-10-08 17:44:23.246185] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.443 [2024-10-08 17:44:23.246190] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x16dd620) 00:28:31.443 [2024-10-08 17:44:23.246197] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.443 [2024-10-08 17:44:23.246213] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173df00, cid 7, qid 0 00:28:31.443 [2024-10-08 17:44:23.246439] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.443 [2024-10-08 17:44:23.246445] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.443 [2024-10-08 17:44:23.246449] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.443 [2024-10-08 17:44:23.246453] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173df00) on tqpair=0x16dd620 00:28:31.443 [2024-10-08 17:44:23.246491] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:31.443 [2024-10-08 17:44:23.246501] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d480) on tqpair=0x16dd620 00:28:31.443 [2024-10-08 17:44:23.246507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.443 [2024-10-08 17:44:23.246513] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d600) on tqpair=0x16dd620 00:28:31.443 [2024-10-08 17:44:23.246518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.443 [2024-10-08 17:44:23.246523] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d780) on tqpair=0x16dd620 00:28:31.443 [2024-10-08 17:44:23.246527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.443 [2024-10-08 17:44:23.246532] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d900) on tqpair=0x16dd620 00:28:31.443 [2024-10-08 17:44:23.246537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.443 [2024-10-08 17:44:23.246546] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.443 [2024-10-08 17:44:23.246550] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.443 [2024-10-08 17:44:23.246553] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dd620) 00:28:31.443 [2024-10-08 17:44:23.246560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.443 [2024-10-08 17:44:23.246572] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d900, cid 3, qid 0 00:28:31.443 [2024-10-08 
17:44:23.246775] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.443 [2024-10-08 17:44:23.246783] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.443 [2024-10-08 17:44:23.246786] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.443 [2024-10-08 17:44:23.246790] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d900) on tqpair=0x16dd620 00:28:31.443 [2024-10-08 17:44:23.246797] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.443 [2024-10-08 17:44:23.246801] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.443 [2024-10-08 17:44:23.246805] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dd620) 00:28:31.443 [2024-10-08 17:44:23.246811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.443 [2024-10-08 17:44:23.246825] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d900, cid 3, qid 0 00:28:31.443 [2024-10-08 17:44:23.247054] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.443 [2024-10-08 17:44:23.247061] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.443 [2024-10-08 17:44:23.247064] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.443 [2024-10-08 17:44:23.247068] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d900) on tqpair=0x16dd620 00:28:31.443 [2024-10-08 17:44:23.247073] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:31.443 [2024-10-08 17:44:23.247078] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:31.443 [2024-10-08 17:44:23.247087] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.443 [2024-10-08 17:44:23.247094] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.443 [2024-10-08 17:44:23.247098] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dd620) 00:28:31.443 [2024-10-08 17:44:23.247104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.443 [2024-10-08 17:44:23.247115] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d900, cid 3, qid 0 00:28:31.443 [2024-10-08 17:44:23.247310] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.443 [2024-10-08 17:44:23.247316] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.443 [2024-10-08 17:44:23.247319] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.443 [2024-10-08 17:44:23.247323] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d900) on tqpair=0x16dd620 00:28:31.443 [2024-10-08 17:44:23.247334] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.443 [2024-10-08 17:44:23.247338] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.443 [2024-10-08 17:44:23.247341] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dd620) 00:28:31.443 [2024-10-08 17:44:23.247348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.443 [2024-10-08 17:44:23.247358] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d900, cid 3, qid 0 00:28:31.443 [2024-10-08 17:44:23.247555] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.443 [2024-10-08 17:44:23.247562] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.443 [2024-10-08 17:44:23.247565] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.443 [2024-10-08 17:44:23.247569] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d900) on tqpair=0x16dd620 00:28:31.443 [2024-10-08 17:44:23.247579] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.443 [2024-10-08 17:44:23.247583] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.443 [2024-10-08 17:44:23.247586] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dd620) 00:28:31.444 [2024-10-08 17:44:23.247593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.444 [2024-10-08 17:44:23.247603] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d900, cid 3, qid 0 00:28:31.444 [2024-10-08 17:44:23.247804] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.444 [2024-10-08 17:44:23.247810] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.444 [2024-10-08 17:44:23.247814] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.247818] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d900) on tqpair=0x16dd620 00:28:31.444 [2024-10-08 17:44:23.247828] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.247832] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.247835] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dd620) 00:28:31.444 [2024-10-08 17:44:23.247842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.444 [2024-10-08 17:44:23.247852] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d900, cid 3, qid 0 00:28:31.444 [2024-10-08 17:44:23.248048] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.444 [2024-10-08 17:44:23.248054] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.444 [2024-10-08 17:44:23.248058] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.248062] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d900) on tqpair=0x16dd620 00:28:31.444 [2024-10-08 17:44:23.248072] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.248076] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.248082] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dd620) 00:28:31.444 [2024-10-08 17:44:23.248089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.444 [2024-10-08 17:44:23.248099] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d900, cid 3, qid 0 00:28:31.444 [2024-10-08 17:44:23.248297] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.444 [2024-10-08 
17:44:23.248303] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.444 [2024-10-08 17:44:23.248307] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.248310] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d900) on tqpair=0x16dd620 00:28:31.444 [2024-10-08 17:44:23.248320] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.248324] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.248328] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dd620) 00:28:31.444 [2024-10-08 17:44:23.248334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.444 [2024-10-08 17:44:23.248344] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d900, cid 3, qid 0 00:28:31.444 [2024-10-08 17:44:23.248535] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.444 [2024-10-08 17:44:23.248543] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.444 [2024-10-08 17:44:23.248546] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.248550] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d900) on tqpair=0x16dd620 00:28:31.444 [2024-10-08 17:44:23.248560] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.248564] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.248567] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dd620) 00:28:31.444 [2024-10-08 17:44:23.248574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.444 [2024-10-08 17:44:23.248584] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d900, cid 3, qid 0 00:28:31.444 [2024-10-08 17:44:23.248785] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.444 [2024-10-08 17:44:23.248792] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.444 [2024-10-08 17:44:23.248796] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.248800] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d900) on tqpair=0x16dd620 00:28:31.444 [2024-10-08 17:44:23.248810] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.248813] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.248817] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dd620) 00:28:31.444 [2024-10-08 17:44:23.248824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.444 [2024-10-08 17:44:23.248834] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d900, cid 3, qid 0 00:28:31.444 [2024-10-08 17:44:23.249032] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.444 [2024-10-08 17:44:23.249039] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.444 [2024-10-08 17:44:23.249042] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.444 
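The run of repeated FABRIC PROPERTY GET entries here is the driver polling CSTS.SHST after initiating a normal shutdown (with RTD3E = 0 us reported above, it falls back to the 10000 ms default timeout; completion is logged a few entries below after 7 ms). A hypothetical sketch of that poll loop, reusing the same stand-in property helpers as the earlier enable-handshake sketch:

    /* Sketch of the CC.SHN / CSTS.SHST shutdown poll producing the
     * repeated PROPERTY GET entries here; helpers are hypothetical. */
    #include <stdint.h>

    #define NVMF_PROP_CC   0x14
    #define NVMF_PROP_CSTS 0x1c
    #define CC_SHN_NORMAL      (1ULL << 14)  /* CC.SHN = 01b, normal shutdown */
    #define CSTS_SHST_MASK     (3ULL << 2)   /* CSTS.SHST, bits 3:2 */
    #define CSTS_SHST_COMPLETE (2ULL << 2)   /* SHST = 10b, shutdown complete */

    extern uint64_t nvmf_prop_get(uint32_t ofst);             /* hypothetical */
    extern void     nvmf_prop_set(uint32_t ofst, uint64_t v); /* hypothetical */

    static void shutdown_controller(void)
    {
        /* Request a normal shutdown via CC.SHN. */
        nvmf_prop_set(NVMF_PROP_CC,
                      (nvmf_prop_get(NVMF_PROP_CC) & ~(3ULL << 14)) | CC_SHN_NORMAL);

        /* Poll CSTS.SHST until the target reports completion; in this
         * run the loop finishes after 7 ms against a 10 s timeout. */
        while ((nvmf_prop_get(NVMF_PROP_CSTS) & CSTS_SHST_MASK) != CSTS_SHST_COMPLETE)
            ;
    }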
[2024-10-08 17:44:23.249046] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d900) on tqpair=0x16dd620 00:28:31.444 [2024-10-08 17:44:23.249056] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.249060] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.249063] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dd620) 00:28:31.444 [2024-10-08 17:44:23.249073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.444 [2024-10-08 17:44:23.249083] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d900, cid 3, qid 0 00:28:31.444 [2024-10-08 17:44:23.249291] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.444 [2024-10-08 17:44:23.249298] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.444 [2024-10-08 17:44:23.249301] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.249305] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d900) on tqpair=0x16dd620 00:28:31.444 [2024-10-08 17:44:23.249314] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.249318] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.249322] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dd620) 00:28:31.444 [2024-10-08 17:44:23.249329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.444 [2024-10-08 17:44:23.249339] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d900, cid 3, qid 0 00:28:31.444 [2024-10-08 17:44:23.249544] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.444 [2024-10-08 17:44:23.249550] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.444 [2024-10-08 17:44:23.249553] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.249557] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d900) on tqpair=0x16dd620 00:28:31.444 [2024-10-08 17:44:23.249567] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.249571] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.249574] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dd620) 00:28:31.444 [2024-10-08 17:44:23.249581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.444 [2024-10-08 17:44:23.249591] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d900, cid 3, qid 0 00:28:31.444 [2024-10-08 17:44:23.249795] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.444 [2024-10-08 17:44:23.249801] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.444 [2024-10-08 17:44:23.249805] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.249809] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d900) on tqpair=0x16dd620 00:28:31.444 [2024-10-08 17:44:23.249818] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.249822] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.249826] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dd620) 00:28:31.444 [2024-10-08 17:44:23.249832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.444 [2024-10-08 17:44:23.249842] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d900, cid 3, qid 0 00:28:31.444 [2024-10-08 17:44:23.253989] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.444 [2024-10-08 17:44:23.253998] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.444 [2024-10-08 17:44:23.254002] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.254006] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d900) on tqpair=0x16dd620 00:28:31.444 [2024-10-08 17:44:23.254016] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.254020] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.254023] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16dd620) 00:28:31.444 [2024-10-08 17:44:23.254030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.444 [2024-10-08 17:44:23.254045] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x173d900, cid 3, qid 0 00:28:31.444 [2024-10-08 17:44:23.254228] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:31.444 [2024-10-08 17:44:23.254235] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:31.444 [2024-10-08 17:44:23.254238] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:31.444 [2024-10-08 17:44:23.254242] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x173d900) on tqpair=0x16dd620 00:28:31.444 [2024-10-08 17:44:23.254250] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:28:31.444 0% 00:28:31.444 Data Units Read: 0 00:28:31.444 Data Units Written: 0 00:28:31.444 Host Read Commands: 0 00:28:31.444 Host Write Commands: 0 00:28:31.444 Controller Busy Time: 0 minutes 00:28:31.444 Power Cycles: 0 00:28:31.444 Power On Hours: 0 hours 00:28:31.444 Unsafe Shutdowns: 0 00:28:31.444 Unrecoverable Media Errors: 0 00:28:31.444 Lifetime Error Log Entries: 0 00:28:31.444 Warning Temperature Time: 0 minutes 00:28:31.444 Critical Temperature Time: 0 minutes 00:28:31.444 00:28:31.444 Number of Queues 00:28:31.444 ================ 00:28:31.444 Number of I/O Submission Queues: 127 00:28:31.444 Number of I/O Completion Queues: 127 00:28:31.444 00:28:31.444 Active Namespaces 00:28:31.444 ================= 00:28:31.444 Namespace ID:1 00:28:31.444 Error Recovery Timeout: Unlimited 00:28:31.444 Command Set Identifier: NVM (00h) 00:28:31.444 Deallocate: Supported 00:28:31.444 Deallocated/Unwritten Error: Not Supported 00:28:31.444 Deallocated Read Value: Unknown 00:28:31.444 Deallocate in Write Zeroes: Not Supported 00:28:31.444 Deallocated Guard Field: 0xFFFF 00:28:31.444 Flush: Supported 00:28:31.444 Reservation: Supported 00:28:31.444 Namespace Sharing Capabilities: Multiple 
00:28:31.445 Size (in LBAs): 131072 (0GiB)
00:28:31.445 Capacity (in LBAs): 131072 (0GiB)
00:28:31.445 Utilization (in LBAs): 131072 (0GiB)
00:28:31.445 NGUID: ABCDEF0123456789ABCDEF0123456789
00:28:31.445 EUI64: ABCDEF0123456789
00:28:31.445 UUID: 9bf27e00-424d-4783-8f9a-4391bf740766
00:28:31.445 Thin Provisioning: Not Supported
00:28:31.445 Per-NS Atomic Units: Yes
00:28:31.445 Atomic Boundary Size (Normal): 0
00:28:31.445 Atomic Boundary Size (PFail): 0
00:28:31.445 Atomic Boundary Offset: 0
00:28:31.445 Maximum Single Source Range Length: 65535
00:28:31.445 Maximum Copy Length: 65535
00:28:31.445 Maximum Source Range Count: 1
00:28:31.445 NGUID/EUI64 Never Reused: No
00:28:31.445 Namespace Write Protected: No
00:28:31.445 Number of LBA Formats: 1
00:28:31.445 Current LBA Format: LBA Format #00
00:28:31.445 LBA Format #00: Data Size: 512 Metadata Size: 0
00:28:31.445
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 465650 ']'
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 465650
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 465650 ']'
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 465650
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:31.445 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 465650
00:28:31.706 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:28:31.706 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:28:31.706 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 465650'
killing process with pid 465650
00:28:31.706 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 465650
00:28:31.706 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 465650
00:28:31.706 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:28:31.706 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:28:31.706 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:28:31.706 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr
00:28:31.706 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save
00:28:31.706 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:28:31.706 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore
00:28:31.706 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:31.706 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:31.706 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:31.706 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:31.706 17:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:34.251 17:44:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:34.251
00:28:34.251 real 0m11.994s
00:28:34.251 user 0m8.898s
00:28:34.251 sys 0m6.337s
00:28:34.251 17:44:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:34.251 17:44:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:28:34.251 ************************************
00:28:34.251 END TEST nvmf_identify
00:28:34.251 ************************************
00:28:34.251 17:44:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:28:34.251 17:44:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:28:34.251 17:44:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:28:34.251 17:44:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:34.251 ************************************
00:28:34.251 START TEST nvmf_perf
00:28:34.251 ************************************
00:28:34.251 17:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:28:34.251 * Looking for test storage...
00:28:34.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:34.251 17:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:34.251 17:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:28:34.251 17:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:34.251 17:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:34.251 17:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:34.251 17:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:34.251 17:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:34.251 17:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:28:34.251 17:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:28:34.251 17:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:28:34.251 17:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:28:34.251 17:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:28:34.251 17:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:28:34.251 17:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:28:34.251 17:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:34.251 17:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:28:34.251 17:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:28:34.251 17:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:34.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.251 --rc genhtml_branch_coverage=1 00:28:34.251 --rc genhtml_function_coverage=1 00:28:34.251 --rc genhtml_legend=1 00:28:34.251 --rc geninfo_all_blocks=1 00:28:34.251 --rc geninfo_unexecuted_blocks=1 00:28:34.251 00:28:34.251 ' 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:34.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.251 --rc genhtml_branch_coverage=1 00:28:34.251 --rc genhtml_function_coverage=1 00:28:34.251 --rc genhtml_legend=1 00:28:34.251 --rc geninfo_all_blocks=1 00:28:34.251 --rc geninfo_unexecuted_blocks=1 00:28:34.251 00:28:34.251 ' 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:34.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.251 --rc genhtml_branch_coverage=1 00:28:34.251 --rc genhtml_function_coverage=1 00:28:34.251 --rc genhtml_legend=1 00:28:34.251 --rc geninfo_all_blocks=1 00:28:34.251 --rc geninfo_unexecuted_blocks=1 00:28:34.251 00:28:34.251 ' 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:34.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.251 --rc genhtml_branch_coverage=1 00:28:34.251 --rc genhtml_function_coverage=1 00:28:34.251 --rc genhtml_legend=1 00:28:34.251 --rc geninfo_all_blocks=1 00:28:34.251 --rc geninfo_unexecuted_blocks=1 00:28:34.251 00:28:34.251 ' 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:34.251 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:34.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.252 17:44:26 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:34.252 17:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:42.390 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:42.390 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:42.390 Found net devices under 0000:31:00.0: cvl_0_0 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:42.390 17:44:33 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:42.390 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:42.391 Found net devices under 0000:31:00.1: cvl_0_1 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:42.391 17:44:33 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:42.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:42.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms
00:28:42.391
00:28:42.391 --- 10.0.0.2 ping statistics ---
00:28:42.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:42.391 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:42.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:42.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms
00:28:42.391
00:28:42.391 --- 10.0.0.1 ping statistics ---
00:28:42.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:42.391 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=470154
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 470154
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 470154 ']'
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:42.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:42.391 17:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:28:42.391 [2024-10-08 17:44:33.839554] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
00:28:42.391 [2024-10-08 17:44:33.839615] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:42.391 [2024-10-08 17:44:33.906476] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:42.391 [2024-10-08 17:44:33.992187] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:42.391 [2024-10-08 17:44:33.992247] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:42.391 [2024-10-08 17:44:33.992253] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:42.391 [2024-10-08 17:44:33.992259] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:42.391 [2024-10-08 17:44:33.992264] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:42.391 [2024-10-08 17:44:33.997005] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:28:42.391 [2024-10-08 17:44:33.997327] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:28:42.391 [2024-10-08 17:44:33.997457] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:28:42.391 [2024-10-08 17:44:33.997459] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:28:42.391 17:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:42.391 17:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0
00:28:42.391 17:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:28:42.391 17:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:42.391 17:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:28:42.391 17:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:42.391 17:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
17:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:28:42.966 17:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:28:42.966 17:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:28:42.966 17:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0
00:28:42.966 17:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:28:43.227 17:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
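At this point nvmf_tcp_init has moved one e810 port into a network namespace (the target side) and verified connectivity in both directions. A condensed, hypothetical replay of that bring-up, assuming root privileges and the interface names this particular run happened to use (cvl_0_0/cvl_0_1); every command appears verbatim in the log above, except that the iptables rule there also carries an SPDK_NVMF comment tag:

    # Target interface lives in its own namespace; the initiator stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator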
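The perf.sh lines that follow assemble the NVMe-oF target over JSON-RPC and then aim spdk_nvme_perf at it. A minimal standalone sketch of the same flow, assuming a running nvmf_tgt reachable on the default /var/tmp/spdk.sock and execution from the SPDK source tree (the short relative paths and consolidated ordering are illustrative; the RPC names and flags are taken from the log entries below):

    #!/usr/bin/env bash
    set -e
    RPC=scripts/rpc.py
    # TCP transport; perf.sh passes '-o' via NVMF_TRANSPORT_OPTS='-t tcp -o' (see above).
    $RPC nvmf_create_transport -t tcp -o
    # Subsystem allowing any host (-a) with a fixed serial number (-s).
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # 64 MiB malloc bdev with 512-byte blocks; the RPC prints the bdev name (Malloc0).
    $RPC bdev_malloc_create 64 512
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Nvme0n1 assumes a local NVMe bdev was attached beforehand (the log does this
    # via gen_nvme.sh piped into rpc.py load_subsystem_config).
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    # Expose the subsystem and the discovery service on 10.0.0.2:4420.
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Queue depth 32, 4 KiB I/O, 50/50 random read/write, 1 second, over TCP.
    build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'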
00:28:43.227 17:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']'
00:28:43.227 17:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:28:43.227 17:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:28:43.227 17:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:28:43.487 [2024-10-08 17:44:35.287990] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:43.487 17:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:43.747 17:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:28:43.747 17:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:43.748 17:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:28:43.748 17:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:28:44.009 17:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:44.269 [2024-10-08 17:44:36.083778] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:44.269 17:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:44.529 17:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']'
00:28:44.529 17:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:28:44.529 17:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:28:44.529 17:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:28:45.913 Initializing NVMe Controllers
00:28:45.913 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a]
00:28:45.913 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0
00:28:45.913 Initialization complete. Launching workers.
00:28:45.913 ========================================================
00:28:45.913 Latency(us)
00:28:45.913 Device Information : IOPS MiB/s Average min max
00:28:45.913 PCIE (0000:65:00.0) NSID 1 from core 0: 77507.97 302.77 412.05 13.33 5403.55
00:28:45.913 ========================================================
00:28:45.913 Total : 77507.97 302.77 412.05 13.33 5403.55
00:28:45.913
00:28:45.913 17:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:47.296 Initializing NVMe Controllers
00:28:47.296 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:47.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:47.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:47.296 Initialization complete. Launching workers.
00:28:47.296 ========================================================
00:28:47.296 Latency(us)
00:28:47.296 Device Information : IOPS MiB/s Average min max
00:28:47.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 92.00 0.36 10872.07 262.77 45990.29
00:28:47.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19717.64 6983.92 47904.30
00:28:47.296 ========================================================
00:28:47.296 Total : 143.00 0.56 14026.78 262.77 47904.30
00:28:47.296
00:28:47.296 17:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:48.681 Initializing NVMe Controllers
00:28:48.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:48.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:48.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:48.681 Initialization complete. Launching workers.
00:28:48.681 ========================================================
00:28:48.681 Latency(us)
00:28:48.681 Device Information : IOPS MiB/s Average min max
00:28:48.681 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11696.43 45.69 2740.75 423.00 7687.14
00:28:48.681 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3848.81 15.03 8351.21 5415.74 15836.25
00:28:48.681 ========================================================
00:28:48.681 Total : 15545.24 60.72 4129.83 423.00 15836.25
00:28:48.681
00:28:48.681 17:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:28:48.681 17:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:28:48.681 17:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:51.223 Initializing NVMe Controllers
00:28:51.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:51.224 Controller IO queue size 128, less than required. Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:51.224 Controller IO queue size 128, less than required. Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:51.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:51.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:51.224 Initialization complete. Launching workers.
00:28:51.224 ========================================================
00:28:51.224 Latency(us)
00:28:51.224 Device Information : IOPS MiB/s Average min max
00:28:51.224 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2036.49 509.12 63710.79 38057.63 114777.47
00:28:51.224 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 600.96 150.24 225347.88 64184.32 360724.43
00:28:51.224 ========================================================
00:28:51.224 Total : 2637.45 659.36 100541.05 38057.63 360724.43
00:28:51.224
00:28:51.224 17:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:28:51.224 No valid NVMe controllers or AIO or URING devices found
00:28:51.224 Initializing NVMe Controllers
00:28:51.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:51.224 Controller IO queue size 128, less than required. Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:51.224 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:28:51.224 Controller IO queue size 128, less than required. Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:51.224 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:28:51.224 WARNING: Some requested NVMe devices were skipped
00:28:51.224 17:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:28:53.767 Initializing NVMe Controllers
00:28:53.767 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:53.767 Controller IO queue size 128, less than required. Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:53.767 Controller IO queue size 128, less than required. Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:53.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:53.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:53.767 Initialization complete. Launching workers.
00:28:53.767
00:28:53.767 ====================
00:28:53.767 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:28:53.767 TCP transport:
00:28:53.767 polls: 44321
00:28:53.767 idle_polls: 29004
00:28:53.767 sock_completions: 15317
00:28:53.767 nvme_completions: 7413
00:28:53.767 submitted_requests: 11136
00:28:53.767 queued_requests: 1
00:28:53.767
00:28:53.767 ====================
00:28:53.768 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:28:53.768 TCP transport:
00:28:53.768 polls: 45935
00:28:53.768 idle_polls: 29614
00:28:53.768 sock_completions: 16321
00:28:53.768 nvme_completions: 6951
00:28:53.768 submitted_requests: 10420
00:28:53.768 queued_requests: 1
00:28:53.768 ========================================================
00:28:53.768 Latency(us)
00:28:53.768 Device Information : IOPS MiB/s Average min max
00:28:53.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1849.85 462.46 70561.62 37984.00 131365.78
00:28:53.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1734.55 433.64 74378.16 29802.59 135755.65
00:28:53.768 ========================================================
00:28:53.768 Total : 3584.40 896.10 72408.50 29802.59 135755.65
00:28:53.768
00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup
00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 470154 ']'
00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 470154
00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 470154 ']'
00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 470154
00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname
00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 470154
00:28:53.768 17:44:45
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 470154' 00:28:53.768 killing process with pid 470154 00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 470154 00:28:53.768 17:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 470154 00:28:56.314 17:44:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:56.314 17:44:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:56.314 17:44:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:56.314 17:44:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:28:56.314 17:44:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:28:56.314 17:44:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:56.314 17:44:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:28:56.314 17:44:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:56.314 17:44:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:56.314 17:44:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.314 17:44:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.314 17:44:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.226 17:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:58.226 00:28:58.226 real 0m23.976s 00:28:58.226 user 0m56.310s 00:28:58.226 sys 0m8.752s 00:28:58.227 17:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:58.227 17:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:58.227 ************************************ 00:28:58.227 END TEST nvmf_perf 00:28:58.227 ************************************ 00:28:58.227 17:44:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:58.227 17:44:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:58.227 17:44:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:58.227 17:44:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.227 ************************************ 00:28:58.227 START TEST nvmf_fio_host 00:28:58.227 ************************************ 00:28:58.227 17:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:58.227 * Looking for test storage... 
00:28:58.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:58.227 17:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:58.227 17:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:28:58.227 17:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:58.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.227 --rc genhtml_branch_coverage=1 00:28:58.227 --rc genhtml_function_coverage=1 00:28:58.227 --rc genhtml_legend=1 00:28:58.227 --rc geninfo_all_blocks=1 00:28:58.227 --rc geninfo_unexecuted_blocks=1 00:28:58.227 00:28:58.227 ' 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:58.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.227 --rc genhtml_branch_coverage=1 00:28:58.227 --rc genhtml_function_coverage=1 00:28:58.227 --rc genhtml_legend=1 00:28:58.227 --rc geninfo_all_blocks=1 00:28:58.227 --rc geninfo_unexecuted_blocks=1 00:28:58.227 00:28:58.227 ' 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:58.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.227 --rc genhtml_branch_coverage=1 00:28:58.227 --rc genhtml_function_coverage=1 00:28:58.227 --rc genhtml_legend=1 00:28:58.227 --rc geninfo_all_blocks=1 00:28:58.227 --rc geninfo_unexecuted_blocks=1 00:28:58.227 00:28:58.227 ' 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:58.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.227 --rc genhtml_branch_coverage=1 00:28:58.227 --rc genhtml_function_coverage=1 00:28:58.227 --rc genhtml_legend=1 00:28:58.227 --rc geninfo_all_blocks=1 00:28:58.227 --rc geninfo_unexecuted_blocks=1 00:28:58.227 00:28:58.227 ' 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.227 17:44:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.227 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:58.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:58.228 
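The "[: : integer expression expected" complaint logged above is harmless: bash's test builtin refuses to compare an empty string numerically, and nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' when the flag it reads is unset. A minimal reproduction with the usual guard (FLAG is a hypothetical stand-in; the trace does not show which variable line 33 dereferences):

    FLAG=""
    if [ "$FLAG" -eq 1 ]; then :; fi        # prints "[: : integer expression expected"
    if [ "${FLAG:-0}" -eq 1 ]; then :; fi   # guarded form: empty defaults to 0, no warning

The run is unaffected because the failed comparison simply takes the false branch, exactly as build_nvmf_app_args does here.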
17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:58.228 17:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.372 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:06.372 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:29:06.372 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:06.373 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:06.373 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:06.373 Found net devices under 0000:31:00.0: cvl_0_0 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:06.373 Found net devices under 0000:31:00.1: cvl_0_1 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:06.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:06.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:29:06.373 00:29:06.373 --- 10.0.0.2 ping statistics --- 00:29:06.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.373 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:06.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:06.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:29:06.373 00:29:06.373 --- 10.0.0.1 ping statistics --- 00:29:06.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.373 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.373 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=477258 00:29:06.374 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:06.374 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:06.374 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 477258 00:29:06.374 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 477258 ']' 00:29:06.374 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.374 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:06.374 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.374 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:06.374 17:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.374 [2024-10-08 17:44:57.839406] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
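Condensed from the nvmf_tcp_init trace above: the target port is sealed inside its own network namespace so initiator traffic genuinely crosses between the two E810 ports instead of short-circuiting through the local stack, and the ACCEPT rule is tagged so teardown can find it again. Every command below is lifted from the trace; only the commentary is mine:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                    # initiator -> target, then the reverse

The sub-millisecond round trips above confirm the namespace plumbing; nvmf_tgt itself is then launched via ip netns exec cvl_0_0_ns_spdk, as the trace shows.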
00:29:06.374 [2024-10-08 17:44:57.839468] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:06.374 [2024-10-08 17:44:57.929559] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:06.374 [2024-10-08 17:44:58.023853] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:06.374 [2024-10-08 17:44:58.023919] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:06.374 [2024-10-08 17:44:58.023928] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:06.374 [2024-10-08 17:44:58.023935] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:06.374 [2024-10-08 17:44:58.023941] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:06.374 [2024-10-08 17:44:58.026452] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.374 [2024-10-08 17:44:58.026615] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:29:06.374 [2024-10-08 17:44:58.026774] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:29:06.374 [2024-10-08 17:44:58.026776] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.947 17:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:06.947 17:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:29:06.947 17:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:06.947 [2024-10-08 17:44:58.823291] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.947 17:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:06.947 17:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:06.947 17:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.947 17:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:07.209 Malloc1 00:29:07.209 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:07.470 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:07.731 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:07.731 [2024-10-08 17:44:59.696349] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:07.993 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:07.993 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:07.993 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:07.993 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:07.993 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:07.993 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:07.993 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:07.993 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:07.993 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:07.993 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:07.993 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:07.993 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:07.993 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:07.993 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:07.993 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:07.993 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:07.993 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:07.993 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:07.993 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:07.993 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:07.993 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:07.993 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:08.283 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:08.283 17:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:08.551 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:08.551 fio-3.35 00:29:08.551 Starting 1 thread 00:29:11.097 00:29:11.097 test: (groupid=0, jobs=1): 
err= 0: pid=478007: Tue Oct 8 17:45:02 2024 00:29:11.097 read: IOPS=13.1k, BW=51.3MiB/s (53.8MB/s)(103MiB/2005msec) 00:29:11.097 slat (usec): min=2, max=283, avg= 2.23, stdev= 2.48 00:29:11.097 clat (usec): min=3416, max=9259, avg=5362.12, stdev=392.35 00:29:11.097 lat (usec): min=3450, max=9261, avg=5364.35, stdev=392.39 00:29:11.097 clat percentiles (usec): 00:29:11.097 | 1.00th=[ 4424], 5.00th=[ 4752], 10.00th=[ 4883], 20.00th=[ 5080], 00:29:11.097 | 30.00th=[ 5145], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5473], 00:29:11.097 | 70.00th=[ 5538], 80.00th=[ 5669], 90.00th=[ 5800], 95.00th=[ 5997], 00:29:11.097 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 7635], 99.95th=[ 8029], 00:29:11.097 | 99.99th=[ 9110] 00:29:11.097 bw ( KiB/s): min=51552, max=53000, per=99.98%, avg=52522.00, stdev=655.88, samples=4 00:29:11.097 iops : min=12888, max=13250, avg=13130.50, stdev=163.97, samples=4 00:29:11.097 write: IOPS=13.1k, BW=51.3MiB/s (53.8MB/s)(103MiB/2005msec); 0 zone resets 00:29:11.097 slat (usec): min=2, max=272, avg= 2.30, stdev= 1.86 00:29:11.097 clat (usec): min=2983, max=7990, avg=4334.55, stdev=317.14 00:29:11.097 lat (usec): min=3001, max=7992, avg=4336.85, stdev=317.27 00:29:11.097 clat percentiles (usec): 00:29:11.097 | 1.00th=[ 3621], 5.00th=[ 3851], 10.00th=[ 3949], 20.00th=[ 4080], 00:29:11.097 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4359], 60.00th=[ 4424], 00:29:11.097 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4817], 00:29:11.097 | 99.00th=[ 5080], 99.50th=[ 5342], 99.90th=[ 6194], 99.95th=[ 6652], 00:29:11.097 | 99.99th=[ 7898] 00:29:11.097 bw ( KiB/s): min=51848, max=52992, per=100.00%, avg=52558.00, stdev=494.55, samples=4 00:29:11.097 iops : min=12962, max=13248, avg=13139.50, stdev=123.64, samples=4 00:29:11.097 lat (msec) : 4=6.37%, 10=93.63% 00:29:11.097 cpu : usr=75.45%, sys=23.45%, ctx=29, majf=0, minf=16 00:29:11.097 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:29:11.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:11.097 issued rwts: total=26332,26335,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:11.097 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:11.097 00:29:11.097 Run status group 0 (all jobs): 00:29:11.097 READ: bw=51.3MiB/s (53.8MB/s), 51.3MiB/s-51.3MiB/s (53.8MB/s-53.8MB/s), io=103MiB (108MB), run=2005-2005msec 00:29:11.097 WRITE: bw=51.3MiB/s (53.8MB/s), 51.3MiB/s-51.3MiB/s (53.8MB/s-53.8MB/s), io=103MiB (108MB), run=2005-2005msec 00:29:11.097 17:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:11.098 17:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:11.098 17:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:11.098 17:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:11.098 17:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:11.098 
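The ~13k-IOPS, 51.3 MiB/s result above came from a target provisioned entirely over JSON-RPC. Distilled from the trace (rpc.py and the fio paths stand in for the full workspace paths; every argument is this run's value):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1            # 64 MiB RAM disk, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # fio reaches the listener through the SPDK plugin, not the kernel initiator:
    LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The grep libasan / grep libclang_rt.asan dance around each fio launch just checks whether the plugin links a sanitizer runtime that would need to join LD_PRELOAD; in this build both lookups come back empty.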
17:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:11.098 17:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:11.098 17:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:11.098 17:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:11.098 17:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:11.098 17:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:11.098 17:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:11.098 17:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:11.098 17:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:11.098 17:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:11.098 17:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:11.098 17:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:11.098 17:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:11.098 17:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:11.098 17:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:11.098 17:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:11.098 17:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:11.358 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:11.358 fio-3.35 00:29:11.358 Starting 1 thread 00:29:13.902 00:29:13.902 test: (groupid=0, jobs=1): err= 0: pid=478727: Tue Oct 8 17:45:05 2024 00:29:13.902 read: IOPS=9653, BW=151MiB/s (158MB/s)(302MiB/2005msec) 00:29:13.902 slat (usec): min=3, max=110, avg= 3.62, stdev= 1.55 00:29:13.902 clat (usec): min=990, max=14546, avg=8001.46, stdev=1952.88 00:29:13.902 lat (usec): min=993, max=14550, avg=8005.07, stdev=1953.00 00:29:13.902 clat percentiles (usec): 00:29:13.902 | 1.00th=[ 4047], 5.00th=[ 5014], 10.00th=[ 5538], 20.00th=[ 6194], 00:29:13.902 | 30.00th=[ 6718], 40.00th=[ 7308], 50.00th=[ 7898], 60.00th=[ 8455], 00:29:13.902 | 70.00th=[ 9110], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11076], 00:29:13.902 | 99.00th=[12387], 99.50th=[12780], 99.90th=[13829], 99.95th=[14091], 00:29:13.902 | 99.99th=[14484] 00:29:13.902 bw ( KiB/s): min=74080, max=82304, per=50.03%, avg=77280.00, stdev=3529.39, samples=4 00:29:13.902 iops : min= 4630, max= 5144, avg=4830.00, stdev=220.59, samples=4 00:29:13.902 write: IOPS=5746, BW=89.8MiB/s (94.2MB/s)(158MiB/1758msec); 0 zone resets 00:29:13.902 slat (usec): min=39, max=407, avg=40.93, 
stdev= 7.52 00:29:13.902 clat (usec): min=3175, max=14672, avg=9031.73, stdev=1339.91 00:29:13.902 lat (usec): min=3215, max=14789, avg=9072.66, stdev=1341.53 00:29:13.902 clat percentiles (usec): 00:29:13.902 | 1.00th=[ 6259], 5.00th=[ 7111], 10.00th=[ 7439], 20.00th=[ 7898], 00:29:13.902 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:29:13.902 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10683], 95.00th=[11338], 00:29:13.902 | 99.00th=[12387], 99.50th=[13042], 99.90th=[14484], 99.95th=[14484], 00:29:13.902 | 99.99th=[14615] 00:29:13.902 bw ( KiB/s): min=76736, max=85984, per=87.24%, avg=80216.00, stdev=4127.66, samples=4 00:29:13.902 iops : min= 4796, max= 5374, avg=5013.50, stdev=257.98, samples=4 00:29:13.902 lat (usec) : 1000=0.01% 00:29:13.902 lat (msec) : 2=0.01%, 4=0.65%, 10=79.08%, 20=20.27% 00:29:13.902 cpu : usr=85.83%, sys=13.12%, ctx=10, majf=0, minf=26 00:29:13.902 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:29:13.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:13.902 issued rwts: total=19356,10103,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:13.902 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:13.902 00:29:13.902 Run status group 0 (all jobs): 00:29:13.902 READ: bw=151MiB/s (158MB/s), 151MiB/s-151MiB/s (158MB/s-158MB/s), io=302MiB (317MB), run=2005-2005msec 00:29:13.902 WRITE: bw=89.8MiB/s (94.2MB/s), 89.8MiB/s-89.8MiB/s (94.2MB/s-94.2MB/s), io=158MiB (166MB), run=1758-1758msec 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:13.902 rmmod nvme_tcp 00:29:13.902 rmmod nvme_fabrics 00:29:13.902 rmmod nvme_keyring 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 477258 ']' 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 477258 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 477258 ']' 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@954 -- # kill -0 477258 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 477258 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 477258' 00:29:13.902 killing process with pid 477258 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 477258 00:29:13.902 17:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 477258 00:29:14.163 17:45:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:14.163 17:45:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:14.163 17:45:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:14.163 17:45:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:29:14.163 17:45:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:29:14.163 17:45:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:14.163 17:45:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:29:14.163 17:45:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:14.163 17:45:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:14.163 17:45:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.163 17:45:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.163 17:45:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:16.708 00:29:16.708 real 0m18.219s 00:29:16.708 user 1m4.149s 00:29:16.708 sys 0m7.969s 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.708 ************************************ 00:29:16.708 END TEST nvmf_fio_host 00:29:16.708 ************************************ 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.708 ************************************ 00:29:16.708 START TEST nvmf_failover 00:29:16.708 ************************************ 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # 
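nvmftestfini above is the mirror image of setup: unload the host-side modules, kill the target, strip only the tagged firewall rule, and collapse the namespace. Distilled from the trace (the netns removal is an assumption: _remove_spdk_ns runs with its trace redirected away, so its body is not visible in this log):

    modprobe -r nvme-tcp                                   # cascades: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess 477258
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drops exactly the tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk                        # assumed body of remove_spdk_ns
    ip -4 addr flush cvl_0_1

With that, nvmf_fio_host ends (18.2 s wall, about 64 s of CPU) and the harness moves straight on to nvmf_failover below.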
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:16.708 * Looking for test storage... 00:29:16.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:16.708 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:16.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.709 --rc genhtml_branch_coverage=1 00:29:16.709 --rc genhtml_function_coverage=1 00:29:16.709 --rc genhtml_legend=1 00:29:16.709 --rc geninfo_all_blocks=1 00:29:16.709 --rc geninfo_unexecuted_blocks=1 00:29:16.709 00:29:16.709 ' 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:16.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.709 --rc genhtml_branch_coverage=1 00:29:16.709 --rc genhtml_function_coverage=1 00:29:16.709 --rc genhtml_legend=1 00:29:16.709 --rc geninfo_all_blocks=1 00:29:16.709 --rc geninfo_unexecuted_blocks=1 00:29:16.709 00:29:16.709 ' 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:16.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.709 --rc genhtml_branch_coverage=1 00:29:16.709 --rc genhtml_function_coverage=1 00:29:16.709 --rc genhtml_legend=1 00:29:16.709 --rc geninfo_all_blocks=1 00:29:16.709 --rc geninfo_unexecuted_blocks=1 00:29:16.709 00:29:16.709 ' 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:16.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.709 --rc genhtml_branch_coverage=1 00:29:16.709 --rc genhtml_function_coverage=1 00:29:16.709 --rc genhtml_legend=1 00:29:16.709 --rc geninfo_all_blocks=1 00:29:16.709 --rc geninfo_unexecuted_blocks=1 00:29:16.709 00:29:16.709 ' 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:16.709 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
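failover.sh has now run the same common.sh preamble as fio.sh, including minting the initiator identity (traced at nvmf/common.sh@17-19 above): one nvme gen-hostnqn call yields a UUID-based NQN, and the embedded UUID is reused as the host ID. A sketch of that derivation (the parameter expansion is my shorthand; common.sh's own extraction is not shown in the trace):

    hostnqn=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    hostid=${hostnqn##*:}          # UUID tail doubles as the host ID
    NVME_HOST=(--hostnqn="$hostnqn" --hostid="$hostid")   # reused by later nvme connect calls

Note both tests obtained the identical NQN on this host, so gen-hostnqn is stable across invocations here, consistent with it deriving the UUID from the machine rather than from fresh randomness.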
00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:29:16.709 17:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:24.852 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:24.852 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:24.852 Found net devices under 0000:31:00.0: cvl_0_0 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:24.852 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:24.853 Found net devices under 0000:31:00.1: cvl_0_1 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:24.853 17:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:24.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:29:24.853 00:29:24.853 --- 10.0.0.2 ping statistics --- 00:29:24.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.853 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:24.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:24.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:29:24.853 00:29:24.853 --- 10.0.0.1 ping statistics --- 00:29:24.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.853 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=483902 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 483902 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 483902 ']' 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:24.853 17:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:24.853 [2024-10-08 17:45:16.236181] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:29:24.853 [2024-10-08 17:45:16.236244] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.853 [2024-10-08 17:45:16.314107] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:24.853 [2024-10-08 17:45:16.407567] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:24.853 [2024-10-08 17:45:16.407623] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.853 [2024-10-08 17:45:16.407632] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.853 [2024-10-08 17:45:16.407638] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.853 [2024-10-08 17:45:16.407645] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:24.853 [2024-10-08 17:45:16.409050] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:29:24.853 [2024-10-08 17:45:16.409280] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:29:24.853 [2024-10-08 17:45:16.409393] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.115 17:45:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:25.115 17:45:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:25.115 17:45:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:25.115 17:45:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:25.115 17:45:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:25.115 17:45:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.115 17:45:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:25.376 [2024-10-08 17:45:17.268918] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.376 17:45:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:25.637 Malloc0 00:29:25.637 17:45:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:25.898 17:45:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:26.159 17:45:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:26.159 [2024-10-08 17:45:18.075356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.159 17:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:26.420 [2024-10-08 17:45:18.267879] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:26.420 17:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:26.681 [2024-10-08 17:45:18.468545] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:29:26.681 17:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=484422 00:29:26.681 17:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:26.681 17:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:26.681 17:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 484422 /var/tmp/bdevperf.sock 00:29:26.681 17:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 484422 ']' 00:29:26.681 17:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:26.681 17:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:26.681 17:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:26.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:26.681 17:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:26.681 17:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:27.622 17:45:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:27.622 17:45:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:27.622 17:45:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:27.883 NVMe0n1 00:29:27.883 17:45:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:28.143 00:29:28.143 17:45:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=484637 00:29:28.143 17:45:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:28.143 17:45:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:29.085 17:45:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:29.346 [2024-10-08 17:45:21.188043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7a410 is same with the state(6) to be set 00:29:29.346 [2024-10-08 17:45:21.188107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7a410 is same with the state(6) to be set 00:29:29.346 [2024-10-08 17:45:21.188113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7a410 is same with the state(6) to be set 00:29:29.346 [2024-10-08 
17:45:21.188118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7a410 is same with the state(6) to be set
00:29:29.347 17:45:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:29:32.648 17:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:29:32.648 00
00:29:32.648 17:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:29:32.909 [2024-10-08 17:45:24.681478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b1c0 is same with the state(6) to be set
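(Editorial aside: the failover exercise here is plain listener choreography against a controller attached with -x failover. A condensed sketch follows; the socket, NQN, ports and rpc.py path are taken from this log, while the comments are assumptions about intent rather than harness output.)

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Register two paths to the same subsystem; with -x failover the second
    # path is held as a standby rather than used active-active.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # Drop the active listener: in-flight I/O completes as ABORTED - SQ DELETION
    # (visible in the try.txt dump further down) and I/O resumes on the standby.
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420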
00:29:32.909 17:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:29:36.205 17:45:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:36.205 [2024-10-08 17:45:27.869233] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:36.205 17:45:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:29:37.151 17:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:29:37.151 [2024-10-08 17:45:29.046716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7c130 is same with the state(6) to be set
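(Editorial aside: the harness now waits on the bdevperf test pid and the JSON summary below is emitted; io_failed there reflects the I/O aborted during the listener flips while the verify job still finished. jq is not part of this harness, but given a saved copy of that summary the headline numbers could be pulled like so:)

    # hypothetical: assumes the JSON block was captured to results.json
    jq '.results[] | {job, iops, mibps, io_failed, avg_latency_us}' results.json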
00:29:37.151 17:45:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 484637
00:29:43.744 { 00:29:43.744 "results": [ 00:29:43.744 { 00:29:43.744 "job": "NVMe0n1", 00:29:43.744 "core_mask": "0x1", 00:29:43.744 "workload": "verify", 00:29:43.744 "status": "finished", 00:29:43.744 "verify_range": { 00:29:43.744 "start": 0, 00:29:43.744 "length": 16384 00:29:43.744 }, 00:29:43.744 "queue_depth": 128, 00:29:43.744 "io_size": 4096, 00:29:43.744 "runtime": 15.006855, 00:29:43.744 "iops": 12483.62831519329, 00:29:43.744 "mibps": 48.76417310622379, 00:29:43.744 "io_failed": 6901, 00:29:43.744 "io_timeout": 0, 00:29:43.744 "avg_latency_us": 9868.232872771456, 00:29:43.744 "min_latency_us": 535.8933333333333, 00:29:43.744 "max_latency_us": 18459.306666666667 00:29:43.744 } 00:29:43.744 ], 00:29:43.744 "core_count": 1 00:29:43.744 }
00:29:43.744 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 484422
00:29:43.744 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 484422 ']'
00:29:43.744 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 484422
00:29:43.744 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:29:43.744 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:43.744 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 484422
00:29:43.744 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:29:43.744 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:29:43.744 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 484422'
killing process with pid 484422
00:29:43.744 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 484422
00:29:43.744 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 484422
00:29:43.744 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-10-08 17:45:18.565143] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
00:29:43.744 [2024-10-08 17:45:18.565233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484422 ] 00:29:43.744 [2024-10-08 17:45:18.650391] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.744 [2024-10-08 17:45:18.747739] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.744 Running I/O for 15 seconds... 00:29:43.744 11071.00 IOPS, 43.25 MiB/s [2024-10-08T15:45:35.736Z] [2024-10-08 17:45:21.188799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.744 [2024-10-08 17:45:21.188832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.744 [2024-10-08 17:45:21.188848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.744 [2024-10-08 17:45:21.188857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.744 [2024-10-08 17:45:21.188867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.744 [2024-10-08 17:45:21.188875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.744 [2024-10-08 17:45:21.188885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.744 [2024-10-08 17:45:21.188892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.744 [2024-10-08 17:45:21.188902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.744 [2024-10-08 17:45:21.188909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.744 [2024-10-08 17:45:21.188919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.744 [2024-10-08 17:45:21.188927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.744 [2024-10-08 17:45:21.188936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.744 [2024-10-08 17:45:21.188943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.744 [2024-10-08 17:45:21.188953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.744 [2024-10-08 17:45:21.188960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.744 [2024-10-08 17:45:21.188969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94872 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.744 [2024-10-08 17:45:21.188981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:43.744 [2024-10-08 17:45:21.188991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.744 [2024-10-08 17:45:21.188998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats for every command still queued on qid:1 - READs lba:94888 through lba:95808 in steps of 8, plus one WRITE at lba:95824 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) - each completed with ABORTED - SQ DELETION (00/08), timestamps 17:45:21.189008 through 17:45:21.191012 ...]
00:29:43.747 [2024-10-08 17:45:21.191037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:43.747 [2024-10-08 17:45:21.191044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:43.747 [2024-10-08 17:45:21.191051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95816 len:8 PRP1 0x0 PRP2 0x0
00:29:43.747 [2024-10-08 17:45:21.191059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:43.747 [2024-10-08 17:45:21.191098] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1144670 was disconnected and freed. reset controller.
00:29:43.747 [2024-10-08 17:45:21.191111] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:29:43.747 [2024-10-08 17:45:21.191131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:43.747 [2024-10-08 17:45:21.191139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:43.747 [2024-10-08 17:45:21.191148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:43.747 [2024-10-08 17:45:21.191156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:43.747 [2024-10-08 17:45:21.191164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:43.747 [2024-10-08 17:45:21.191171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:43.747 [2024-10-08 17:45:21.191179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:43.747 [2024-10-08 17:45:21.191186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:43.747 [2024-10-08 17:45:21.191194] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:43.747 [2024-10-08 17:45:21.194740] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:43.747 [2024-10-08 17:45:21.194764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1123e40 (9): Bad file descriptor
00:29:43.747 [2024-10-08 17:45:21.232936] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
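The sequence above is the reset path this failover test is meant to exercise: every command still queued when the I/O submission queue is deleted completes with ABORTED - SQ DELETION (00/08), the admin-queue ASYNC EVENT REQUESTs are aborted the same way, and bdev_nvme then moves the controller to the next trid (10.0.0.2:4420 to 10.0.0.2:4421) and resets it. A minimal sketch for summarizing such a burst offline, assuming the console output was saved to a file named build.log (the filename is illustrative, not produced by this job):

  # count aborted I/O commands by opcode (READ vs WRITE) on qid:1
  grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' build.log | awk '{print $2}' | sort | uniq -c
  # total completions carrying the SQ DELETION abort status
  grep -c 'ABORTED - SQ DELETION' build.log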
00:29:43.747 11129.00 IOPS, 43.47 MiB/s [2024-10-08T15:45:35.739Z] 11228.33 IOPS, 43.86 MiB/s [2024-10-08T15:45:35.739Z] 11606.75 IOPS, 45.34 MiB/s [2024-10-08T15:45:35.739Z]
00:29:43.747 [2024-10-08 17:45:24.683480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.747 [2024-10-08 17:45:24.683511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ/ABORTED - SQ DELETION pairs repeat on qid:1 for lba:46328 through lba:46392, timestamps 17:45:24.683524 through 17:45:24.683633 ...]
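The per-second progress markers above are consistent with the 8-sector (4 KiB) I/O size seen in the trace: 11129.00 IOPS × 8 × 512 B = 45,584,384 B/s ≈ 43.47 MiB/s, and likewise 11228.33 IOPS ≈ 43.86 MiB/s and 11606.75 IOPS ≈ 45.34 MiB/s.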
[... the same pattern continues: READ/ABORTED - SQ DELETION pairs for lba:46400 through lba:46424, then WRITE/ABORTED - SQ DELETION pairs (SGL DATA BLOCK OFFSET 0x0 len:0x1000) for lba:46432 through lba:46952, timestamps 17:45:24.683640 through 17:45:24.684482 ...]
00:29:43.749 [2024-10-08 17:45:24.684489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:43.749 [2024-10-08
17:45:24.684494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.749 [2024-10-08 17:45:24.684501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.749 [2024-10-08 17:45:24.684506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.749 [2024-10-08 17:45:24.684513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.749 [2024-10-08 17:45:24.684518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.749 [2024-10-08 17:45:24.684525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.749 [2024-10-08 17:45:24.684530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.749 [2024-10-08 17:45:24.684537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.749 [2024-10-08 17:45:24.684542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.749 [2024-10-08 17:45:24.684548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.749 [2024-10-08 17:45:24.684554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.749 [2024-10-08 17:45:24.684560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.749 [2024-10-08 17:45:24.684565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.749 [2024-10-08 17:45:24.684572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:47016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.749 [2024-10-08 17:45:24.684577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.749 [2024-10-08 17:45:24.684584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.749 [2024-10-08 17:45:24.684589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.749 [2024-10-08 17:45:24.684595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.749 [2024-10-08 17:45:24.684601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.749 [2024-10-08 17:45:24.684608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:47040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.749 [2024-10-08 17:45:24.684613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.749 [2024-10-08 17:45:24.684620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.750 [2024-10-08 17:45:24.684625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.684632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.750 [2024-10-08 17:45:24.684638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.684644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.750 [2024-10-08 17:45:24.684650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.684657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:47072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.750 [2024-10-08 17:45:24.684662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.684668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.750 [2024-10-08 17:45:24.684674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.684691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.750 [2024-10-08 17:45:24.684697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47088 len:8 PRP1 0x0 PRP2 0x0 00:29:43.750 [2024-10-08 17:45:24.684702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.684710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.750 [2024-10-08 17:45:24.684714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.750 [2024-10-08 17:45:24.684719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47096 len:8 PRP1 0x0 PRP2 0x0 00:29:43.750 [2024-10-08 17:45:24.684724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.684731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.750 [2024-10-08 17:45:24.684735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.750 [2024-10-08 17:45:24.684739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47104 len:8 PRP1 0x0 PRP2 0x0 00:29:43.750 [2024-10-08 17:45:24.684744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.684750] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.750 [2024-10-08 17:45:24.684754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.750 [2024-10-08 17:45:24.684758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47112 len:8 PRP1 0x0 PRP2 0x0 00:29:43.750 [2024-10-08 17:45:24.684763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.684769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.750 [2024-10-08 17:45:24.684775] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.750 [2024-10-08 17:45:24.684780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47120 len:8 PRP1 0x0 PRP2 0x0 00:29:43.750 [2024-10-08 17:45:24.684785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.684790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.750 [2024-10-08 17:45:24.684794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.750 [2024-10-08 17:45:24.684799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47128 len:8 PRP1 0x0 PRP2 0x0 00:29:43.750 [2024-10-08 17:45:24.684804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.684809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.750 [2024-10-08 17:45:24.684813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.750 [2024-10-08 17:45:24.684818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47136 len:8 PRP1 0x0 PRP2 0x0 00:29:43.750 [2024-10-08 17:45:24.684823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.684828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.750 [2024-10-08 17:45:24.684832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.750 [2024-10-08 17:45:24.684837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47144 len:8 PRP1 0x0 PRP2 0x0 00:29:43.750 [2024-10-08 17:45:24.684842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.684847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.750 [2024-10-08 17:45:24.684851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.750 [2024-10-08 17:45:24.684855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47152 len:8 PRP1 0x0 PRP2 0x0 00:29:43.750 [2024-10-08 17:45:24.684861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.684866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:29:43.750 [2024-10-08 17:45:24.684870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.750 [2024-10-08 17:45:24.684875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47160 len:8 PRP1 0x0 PRP2 0x0 00:29:43.750 [2024-10-08 17:45:24.684880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.684886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.750 [2024-10-08 17:45:24.684890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.750 [2024-10-08 17:45:24.684895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47168 len:8 PRP1 0x0 PRP2 0x0 00:29:43.750 [2024-10-08 17:45:24.684900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.684905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.750 [2024-10-08 17:45:24.684909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.750 [2024-10-08 17:45:24.684914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47176 len:8 PRP1 0x0 PRP2 0x0 00:29:43.750 [2024-10-08 17:45:24.684919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.684925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.750 [2024-10-08 17:45:24.684929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.750 [2024-10-08 17:45:24.684933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47184 len:8 PRP1 0x0 PRP2 0x0 00:29:43.750 [2024-10-08 17:45:24.684939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.684944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.750 [2024-10-08 17:45:24.684948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.750 [2024-10-08 17:45:24.684953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47192 len:8 PRP1 0x0 PRP2 0x0 00:29:43.750 [2024-10-08 17:45:24.684958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.684963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.750 [2024-10-08 17:45:24.684967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.750 [2024-10-08 17:45:24.684972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47200 len:8 PRP1 0x0 PRP2 0x0 00:29:43.750 [2024-10-08 17:45:24.684981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.684986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.750 [2024-10-08 17:45:24.684991] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.750 [2024-10-08 17:45:24.684995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47208 len:8 PRP1 0x0 PRP2 0x0 00:29:43.750 [2024-10-08 17:45:24.685001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.685006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.750 [2024-10-08 17:45:24.685010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.750 [2024-10-08 17:45:24.685018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47216 len:8 PRP1 0x0 PRP2 0x0 00:29:43.750 [2024-10-08 17:45:24.685023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.685028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.750 [2024-10-08 17:45:24.685032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.750 [2024-10-08 17:45:24.685036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47224 len:8 PRP1 0x0 PRP2 0x0 00:29:43.750 [2024-10-08 17:45:24.685042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.685047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.750 [2024-10-08 17:45:24.685051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.750 [2024-10-08 17:45:24.685055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47232 len:8 PRP1 0x0 PRP2 0x0 00:29:43.750 [2024-10-08 17:45:24.685061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.685067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.750 [2024-10-08 17:45:24.685071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.750 [2024-10-08 17:45:24.685075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47240 len:8 PRP1 0x0 PRP2 0x0 00:29:43.750 [2024-10-08 17:45:24.685081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.685086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.750 [2024-10-08 17:45:24.685090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.750 [2024-10-08 17:45:24.685095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47248 len:8 PRP1 0x0 PRP2 0x0 00:29:43.750 [2024-10-08 17:45:24.685100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.685106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.750 [2024-10-08 17:45:24.685110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:29:43.750 [2024-10-08 17:45:24.685114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47256 len:8 PRP1 0x0 PRP2 0x0 00:29:43.750 [2024-10-08 17:45:24.685119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.750 [2024-10-08 17:45:24.685125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.751 [2024-10-08 17:45:24.685128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.751 [2024-10-08 17:45:24.685133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47264 len:8 PRP1 0x0 PRP2 0x0 00:29:43.751 [2024-10-08 17:45:24.685138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.751 [2024-10-08 17:45:24.685143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.751 [2024-10-08 17:45:24.685147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.751 [2024-10-08 17:45:24.685151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47272 len:8 PRP1 0x0 PRP2 0x0 00:29:43.751 [2024-10-08 17:45:24.685156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.751 [2024-10-08 17:45:24.685162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.751 [2024-10-08 17:45:24.685166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.751 [2024-10-08 17:45:24.685170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47280 len:8 PRP1 0x0 PRP2 0x0 00:29:43.751 [2024-10-08 17:45:24.685175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.751 [2024-10-08 17:45:24.685180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.751 [2024-10-08 17:45:24.685184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.751 [2024-10-08 17:45:24.685188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47288 len:8 PRP1 0x0 PRP2 0x0 00:29:43.751 [2024-10-08 17:45:24.685193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.751 [2024-10-08 17:45:24.685199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.751 [2024-10-08 17:45:24.685203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.751 [2024-10-08 17:45:24.685207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47296 len:8 PRP1 0x0 PRP2 0x0 00:29:43.751 [2024-10-08 17:45:24.685212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.751 [2024-10-08 17:45:24.685218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.751 [2024-10-08 17:45:24.685224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.751 
[2024-10-08 17:45:24.685229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47304 len:8 PRP1 0x0 PRP2 0x0 00:29:43.751 [2024-10-08 17:45:24.685234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.751 [2024-10-08 17:45:24.685239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.751 [2024-10-08 17:45:24.685243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.751 [2024-10-08 17:45:24.685247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47312 len:8 PRP1 0x0 PRP2 0x0 00:29:43.751 [2024-10-08 17:45:24.695979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.751 [2024-10-08 17:45:24.696007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.751 [2024-10-08 17:45:24.696014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.751 [2024-10-08 17:45:24.696020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47320 len:8 PRP1 0x0 PRP2 0x0 00:29:43.751 [2024-10-08 17:45:24.696026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.751 [2024-10-08 17:45:24.696031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.751 [2024-10-08 17:45:24.696035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.751 [2024-10-08 17:45:24.696039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47328 len:8 PRP1 0x0 PRP2 0x0 00:29:43.751 [2024-10-08 17:45:24.696044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.751 [2024-10-08 17:45:24.696050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.751 [2024-10-08 17:45:24.696054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.751 [2024-10-08 17:45:24.696058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47336 len:8 PRP1 0x0 PRP2 0x0 00:29:43.751 [2024-10-08 17:45:24.696063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.751 [2024-10-08 17:45:24.696099] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1146480 was disconnected and freed. reset controller. 
00:29:43.751 [2024-10-08 17:45:24.696107] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:29:43.751 [2024-10-08 17:45:24.696130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:43.751 [2024-10-08 17:45:24.696136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:2, cid:1, and cid:0 ...]
00:29:43.751 [2024-10-08 17:45:24.696180] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:43.751 [2024-10-08 17:45:24.696215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1123e40 (9): Bad file descriptor
00:29:43.751 [2024-10-08 17:45:24.698617] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:43.751 [2024-10-08 17:45:24.729452] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
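The repeated notice pairs above all come from the same teardown path: when a submission queue is deleted during failover, every I/O still queued on the qpair is completed manually with an ABORTED - SQ DELETION status (sct 00, sc 08) before the controller is reset against the next transport ID. A minimal self-contained C sketch of that drain-and-complete pattern (the struct and function names below are invented for illustration and are not the actual SPDK API):

/* Sketch of the abort-queued-requests pattern seen in the notices above:
 * on SQ deletion, each still-queued request is completed manually with
 * ABORTED - SQ DELETION (00/08). Names here are hypothetical. */
#include <stdint.h>
#include <stdio.h>

struct req {
    uint16_t cid;      /* command identifier */
    uint64_t lba;      /* starting logical block address */
    uint32_t len;      /* length in blocks */
    struct req *next;  /* singly linked submission queue */
};

/* Print the command and complete it with the generic abort status. */
static void complete_aborted(const struct req *r)
{
    printf("WRITE sqid:1 cid:%u nsid:1 lba:%llu len:%u\n",
           r->cid, (unsigned long long)r->lba, r->len);
    printf("ABORTED - SQ DELETION (00/08) qid:1 cid:%u\n", r->cid);
}

/* Drain the queue on SQ deletion, completing each request in turn. */
static void abort_queued_reqs(struct req **head)
{
    while (*head) {
        struct req *r = *head;
        *head = r->next;
        fprintf(stderr, "aborting queued i/o\n");
        complete_aborted(r);
    }
}

int main(void)
{
    struct req c = { .cid = 85, .lba = 46656, .len = 8, .next = NULL };
    struct req b = { .cid = 38, .lba = 46648, .len = 8, .next = &c };
    struct req a = { .cid = 92, .lba = 46640, .len = 8, .next = &b };
    struct req *q = &a;

    abort_queued_reqs(&q);  /* emits one notice pair per queued request */
    return 0;
}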
00:29:43.751 11774.40 IOPS, 45.99 MiB/s [2024-10-08T15:45:35.743Z]
11991.83 IOPS, 46.84 MiB/s [2024-10-08T15:45:35.743Z]
12137.86 IOPS, 47.41 MiB/s [2024-10-08T15:45:35.743Z]
12228.50 IOPS, 47.77 MiB/s [2024-10-08T15:45:35.743Z]
[2024-10-08 17:45:29.047406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.751 [2024-10-08 17:45:29.047434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION (00/08) notice pair repeats for lba 111904 through lba 112320, len:8 each ...]
00:29:43.753 [2024-10-08 17:45:29.048084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:112328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:43.753 [2024-10-08 17:45:29.048089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / ABORTED - SQ DELETION (00/08) notice pair repeats for lba 112336 through lba 112608 ...]
00:29:43.753 [2024-10-08 17:45:29.048508] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:112616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.753 [2024-10-08 17:45:29.048513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.753 [2024-10-08 17:45:29.048520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:112624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.753 [2024-10-08 17:45:29.048525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.753 [2024-10-08 17:45:29.048531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:112632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.753 [2024-10-08 17:45:29.048537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.753 [2024-10-08 17:45:29.048543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.753 [2024-10-08 17:45:29.048548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.753 [2024-10-08 17:45:29.048555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:112648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.753 [2024-10-08 17:45:29.048560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.753 [2024-10-08 17:45:29.048566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.753 [2024-10-08 17:45:29.048572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:112664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048625] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 
lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.754 [2024-10-08 17:45:29.048934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.048951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.754 [2024-10-08 17:45:29.048956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.754 [2024-10-08 17:45:29.048960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112912 len:8 PRP1 0x0 PRP2 0x0 00:29:43.754 [2024-10-08 17:45:29.048965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.754 [2024-10-08 17:45:29.049004] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11534c0 was disconnected and freed. reset controller. 
00:29:43.754 [2024-10-08 17:45:29.049011] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:29:43.754 [2024-10-08 17:45:29.049028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:43.754 [2024-10-08 17:45:29.049033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:43.754 [2024-10-08 17:45:29.049040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:43.754 [2024-10-08 17:45:29.049045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:43.754 [2024-10-08 17:45:29.049050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:43.754 [2024-10-08 17:45:29.049055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:43.754 [2024-10-08 17:45:29.049061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:43.754 [2024-10-08 17:45:29.049066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:43.754 [2024-10-08 17:45:29.049071] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:43.754 [2024-10-08 17:45:29.049089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1123e40 (9): Bad file descriptor
00:29:43.754 [2024-10-08 17:45:29.051494] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:43.754 [2024-10-08 17:45:29.130180] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
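For context on the records above: this burst is one failover cycle. bdev_nvme drops the active path, every queued admin and I/O command completes with ABORTED - SQ DELETION, bdev_nvme_failover_trid moves to the next registered path, and the controller is reset. The paths themselves are registered by the rpc.py calls visible later in this trace; condensed into plain bash as a hedged sketch (rpc.py path, socket, NQN, addresses and flags copied from those calls, nothing invented):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    # one bdev (NVMe0), three alternate paths to the same subsystem
    for port in 4420 4421 4422; do
        $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
            -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done
    # detaching the currently active path is what forces the
    # "Start failover from ... to ..." sequence logged above
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1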
00:29:43.754 12182.00 IOPS, 47.59 MiB/s [2024-10-08T15:45:35.746Z] 12253.60 IOPS, 47.87 MiB/s [2024-10-08T15:45:35.746Z] 12318.18 IOPS, 48.12 MiB/s [2024-10-08T15:45:35.746Z] 12382.42 IOPS, 48.37 MiB/s [2024-10-08T15:45:35.746Z] 12418.62 IOPS, 48.51 MiB/s [2024-10-08T15:45:35.746Z] 12456.07 IOPS, 48.66 MiB/s [2024-10-08T15:45:35.746Z] 12481.80 IOPS, 48.76 MiB/s
00:29:43.754 Latency(us)
00:29:43.754 [2024-10-08T15:45:35.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:43.754 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:43.755 Verification LBA range: start 0x0 length 0x4000
00:29:43.755 NVMe0n1 : 15.01 12483.63 48.76 459.86 0.00 9868.23 535.89 18459.31
00:29:43.755 [2024-10-08T15:45:35.747Z] ===================================================================================================================
00:29:43.755 [2024-10-08T15:45:35.747Z] Total : 12483.63 48.76 459.86 0.00 9868.23 535.89 18459.31
00:29:43.755 Received shutdown signal, test time was about 15.000000 seconds
00:29:43.755
00:29:43.755 Latency(us)
00:29:43.755 [2024-10-08T15:45:35.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:43.755 [2024-10-08T15:45:35.747Z] ===================================================================================================================
00:29:43.755 [2024-10-08T15:45:35.747Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:43.755 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:29:43.755 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:29:43.755 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:29:43.755 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=487618
00:29:43.755 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 487618 /var/tmp/bdevperf.sock
00:29:43.755 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:29:43.755 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 487618 ']'
00:29:43.755 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:43.755 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:43.755 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
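The grep/count pair in the trace above is the test's pass gate: the run is expected to leave exactly three 'Resetting controller successful' lines in the captured bdevperf output (count=3 here). Reconstructed as standalone bash, assuming the capture is the try.txt file named later in this trace:

    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$log")
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, saw $count" >&2
        exit 1
    fi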
00:29:43.755 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:43.755 17:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:29:44.328 17:45:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:44.328 17:45:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:29:44.328 17:45:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:29:44.588 [2024-10-08 17:45:36.380919] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:29:44.588 17:45:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:29:44.588 [2024-10-08 17:45:36.565323] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:29:44.848 17:45:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:29:44.848 NVMe0n1
00:29:45.110 17:45:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:29:45.371 00
00:29:45.371 17:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:29:45.631 00
00:29:45.631 17:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:45.631 17:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:29:45.892 17:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:45.892 17:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:29:49.193 17:45:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:49.193 17:45:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:29:49.193 17:45:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=488792
00:29:49.193 17:45:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:29:49.193 17:45:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 488792
00:29:50.578 {
00:29:50.578 "results": [
00:29:50.578 {
00:29:50.578 "job": "NVMe0n1",
00:29:50.578 "core_mask": "0x1",
00:29:50.578 "workload": "verify",
00:29:50.578 "status": "finished",
00:29:50.578 "verify_range": {
00:29:50.578 "start": 0,
00:29:50.578 "length": 16384
00:29:50.578 },
00:29:50.578 "queue_depth": 128,
00:29:50.578 "io_size": 4096,
00:29:50.578 "runtime": 1.007054,
00:29:50.578 "iops": 12843.402637793008,
00:29:50.578 "mibps": 50.169541553878936,
00:29:50.578 "io_failed": 0,
00:29:50.578 "io_timeout": 0,
00:29:50.578 "avg_latency_us": 9929.586441935982,
00:29:50.578 "min_latency_us": 894.2933333333333,
00:29:50.578 "max_latency_us": 8574.293333333333
00:29:50.578 }
00:29:50.578 ],
00:29:50.578 "core_count": 1
00:29:50.578 }
00:29:50.578 17:45:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:29:50.578 [2024-10-08 17:45:35.424532] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
00:29:50.578 [2024-10-08 17:45:35.424592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487618 ]
00:29:50.578 [2024-10-08 17:45:35.502201] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:50.578 [2024-10-08 17:45:35.554477] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:29:50.578 [2024-10-08 17:45:37.820161] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:29:50.578 [2024-10-08 17:45:37.820200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:50.578 [2024-10-08 17:45:37.820209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:50.578 [2024-10-08 17:45:37.820217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:50.578 [2024-10-08 17:45:37.820222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:50.578 [2024-10-08 17:45:37.820228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:50.578 [2024-10-08 17:45:37.820233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:50.578 [2024-10-08 17:45:37.820239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:50.578 [2024-10-08 17:45:37.820244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:50.578 [2024-10-08 17:45:37.820249] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.578 [2024-10-08 17:45:37.820271] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.578 [2024-10-08 17:45:37.820283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201be40 (9): Bad file descriptor
00:29:50.578 [2024-10-08 17:45:37.832122] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:50.578 Running I/O for 1 seconds...
00:29:50.578 12798.00 IOPS, 49.99 MiB/s
00:29:50.578 Latency(us)
00:29:50.578 [2024-10-08T15:45:42.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:50.578 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:50.578 Verification LBA range: start 0x0 length 0x4000
00:29:50.578 NVMe0n1 : 1.01 12843.40 50.17 0.00 0.00 9929.59 894.29 8574.29
00:29:50.578 [2024-10-08T15:45:42.570Z] ===================================================================================================================
00:29:50.578 [2024-10-08T15:45:42.570Z] Total : 12843.40 50.17 0.00 0.00 9929.59 894.29 8574.29
00:29:50.578 17:45:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:50.578 17:45:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:29:50.578 17:45:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:50.578 17:45:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:50.578 17:45:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:29:50.839 17:45:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:51.101 17:45:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
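The detach sequence above walks the remaining paths back down: around each bdev_nvme_detach_controller the harness re-runs bdev_nvme_get_controllers piped through grep -q NVMe0, and an empty result makes grep exit nonzero, which the harness treats as failure. Roughly, under the same rpc.py/socket shorthand as the earlier sketch:

    for port in 4422 4421; do
        $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
            -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
        # at least one surviving path must still expose the controller
        $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0
    done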
00:29:54.399 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:54.660 rmmod nvme_tcp 00:29:54.660 rmmod nvme_fabrics 00:29:54.660 rmmod nvme_keyring 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 483902 ']' 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 483902 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 483902 ']' 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 483902 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 483902 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 483902' 00:29:54.660 killing process with pid 483902 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 483902 00:29:54.660 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 483902 00:29:54.921 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:54.921 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:54.921 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:54.921 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:29:54.921 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:54.921 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:29:54.921 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@789 -- # iptables-restore 00:29:54.921 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:54.921 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:54.921 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.921 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:54.921 17:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.831 17:45:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:56.831 00:29:56.831 real 0m40.612s 00:29:56.831 user 2m3.857s 00:29:56.831 sys 0m8.997s 00:29:56.831 17:45:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:56.831 17:45:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:56.831 ************************************ 00:29:56.831 END TEST nvmf_failover 00:29:56.831 ************************************ 00:29:57.093 17:45:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:57.093 17:45:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:57.093 17:45:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:57.093 17:45:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.093 ************************************ 00:29:57.093 START TEST nvmf_host_discovery 00:29:57.093 ************************************ 00:29:57.093 17:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:57.093 * Looking for test storage... 
00:29:57.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:57.093 17:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:57.093 17:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:29:57.093 17:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:57.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.093 --rc genhtml_branch_coverage=1 00:29:57.093 --rc genhtml_function_coverage=1 00:29:57.093 --rc genhtml_legend=1 00:29:57.093 --rc geninfo_all_blocks=1 00:29:57.093 --rc geninfo_unexecuted_blocks=1 00:29:57.093 00:29:57.093 ' 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:57.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.093 --rc genhtml_branch_coverage=1 00:29:57.093 --rc genhtml_function_coverage=1 00:29:57.093 --rc genhtml_legend=1 00:29:57.093 --rc geninfo_all_blocks=1 00:29:57.093 --rc geninfo_unexecuted_blocks=1 00:29:57.093 00:29:57.093 ' 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:57.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.093 --rc genhtml_branch_coverage=1 00:29:57.093 --rc genhtml_function_coverage=1 00:29:57.093 --rc genhtml_legend=1 00:29:57.093 --rc geninfo_all_blocks=1 00:29:57.093 --rc geninfo_unexecuted_blocks=1 00:29:57.093 00:29:57.093 ' 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:57.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.093 --rc genhtml_branch_coverage=1 00:29:57.093 --rc genhtml_function_coverage=1 00:29:57.093 --rc genhtml_legend=1 00:29:57.093 --rc geninfo_all_blocks=1 00:29:57.093 --rc geninfo_unexecuted_blocks=1 00:29:57.093 00:29:57.093 ' 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:29:57.093 17:45:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.093 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:57.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:29:57.355 17:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.498 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:05.499 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:05.499 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:05.499 17:45:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:05.499 Found net devices under 0000:31:00.0: cvl_0_0 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:05.499 Found net devices under 0000:31:00.1: cvl_0_1 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:05.499 
17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:05.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:30:05.499 00:30:05.499 --- 10.0.0.2 ping statistics --- 00:30:05.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.499 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:05.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:30:05.499 00:30:05.499 --- 10.0.0.1 ping statistics --- 00:30:05.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.499 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=494076 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 494076 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 494076 ']' 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:05.499 17:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:05.499 [2024-10-08 17:45:56.716721] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
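Everything traced up to this point is nvmftestinit building the back-to-back NVMe/TCP topology for this test: both E810 ports (cvl_0_0 and cvl_0_1) sit in the same machine, so one port is moved into a private network namespace to play the target while the other stays in the root namespace as the initiator. The '[: : integer expression expected' complaint from nvmf/common.sh line 33 earlier in the trace is harmless noise: an unset flag is handed to a numeric '-eq' test, and defaulting it first, e.g. '[ "${flag:-0}" -eq 1 ]' (with 'flag' standing in for whichever variable is unset there), would silence it. A condensed sketch of the namespace plumbing, using the interface names and addresses printed in this run:

    # One E810 port becomes the target inside a namespace; the other stays in
    # the root namespace as the initiator. Names/IPs as printed in this log.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

With connectivity proven in both directions and nvme-tcp loaded via modprobe, nvmfappstart launches the target inside the namespace ('ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2', pid 494076); the 'Starting SPDK v25.01-pre' banner above and the DPDK EAL parameter record that follows are its startup output.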
00:30:05.499 [2024-10-08 17:45:56.716788] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.499 [2024-10-08 17:45:56.806646] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.499 [2024-10-08 17:45:56.899635] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.500 [2024-10-08 17:45:56.899696] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.500 [2024-10-08 17:45:56.899704] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.500 [2024-10-08 17:45:56.899711] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.500 [2024-10-08 17:45:56.899718] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:05.500 [2024-10-08 17:45:56.900531] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.760 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:05.760 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:30:05.760 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:05.760 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:05.760 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:05.760 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.760 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:05.760 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.760 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:05.760 [2024-10-08 17:45:57.578506] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.760 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.760 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:05.760 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.760 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:05.760 [2024-10-08 17:45:57.590740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:05.760 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.760 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:05.760 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.760 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:05.760 null0 00:30:05.760 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.760 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:05.761 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.761 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:05.761 null1 00:30:05.761 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.761 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:05.761 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.761 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:05.761 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.761 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=494391 00:30:05.761 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 494391 /tmp/host.sock 00:30:05.761 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:05.761 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 494391 ']' 00:30:05.761 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:30:05.761 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:05.761 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:05.761 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:05.761 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:05.761 17:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:05.761 [2024-10-08 17:45:57.687662] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
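Two SPDK applications are now running. The first nvmf_tgt (pid 494076) lives inside the cvl_0_0_ns_spdk namespace and answers RPCs on the default socket /var/tmp/spdk.sock; it is the NVMe-oF target. The second nvmf_tgt (pid 494391, started with '-m 0x1 -r /tmp/host.sock') stays in the root namespace and acts as the host, which is why every plain rpc_cmd in this trace configures the target while every 'rpc_cmd -s /tmp/host.sock' drives the host. Before the host came up, the target was given a TCP transport, a discovery listener on port 8009, and two null bdevs to export later. In scripts/rpc.py terms that amounts to roughly the following (rpc_cmd is the autotest wrapper around that tool, so this is an equivalent sketch, not the literal invocations):

    # Target-side setup traced above; goes to /var/tmp/spdk.sock by default.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009                     # discovery service
    scripts/rpc.py bdev_null_create null0 1000 512     # 1000 MB, 512 B blocks
    scripts/rpc.py bdev_null_create null1 1000 512
    scripts/rpc.py bdev_wait_for_examine               # let bdev examination settle

The host instance announced by the 'Starting SPDK' banner above is driven entirely over /tmp/host.sock from here on; its DPDK EAL parameter record follows.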
00:30:05.761 [2024-10-08 17:45:57.687722] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494391 ] 00:30:06.021 [2024-10-08 17:45:57.769498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.022 [2024-10-08 17:45:57.864979] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:06.592 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:06.853 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:07.115 [2024-10-08 17:45:58.854024] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:07.115 17:45:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:30:07.115 17:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:30:07.686 [2024-10-08 17:45:59.579039] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:07.686 [2024-10-08 17:45:59.579071] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:07.686 [2024-10-08 17:45:59.579086] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:07.686 
[2024-10-08 17:45:59.665354] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:07.946 [2024-10-08 17:45:59.851943] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:07.946 [2024-10-08 17:45:59.851967] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
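This is the heart of the discovery test. The host app was pointed at the discovery service with 'bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test', and at first both the controller list and the bdev list on the host are empty, hence the repeated '' == '' checks. Once the target creates nqn.2016-06.io.spdk:cnode0, attaches null0 as a namespace, opens the 4420 listener, and allows the host NQN via nvmf_subsystem_add_host, the next discovery log page advertises the subsystem; the INFO records above show bdev_nvme reacting (ctrlr attached, log page fetched, 'new subsystem nvme0', 'attach nvme0 done'), after which controller nvme0 and bdev nvme0n1 appear on the host. The polling helpers being traced boil down to roughly this (a reconstruction from the xtrace, not the verbatim test scripts):

    # Host state is read over the host RPC socket and flattened with
    # jq/sort/xargs so it can be compared as a single string.
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    waitforcondition() {        # retry a shell condition up to 10 times, 1 s apart
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }
    waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'

The notify_get_notifications calls serve the same purpose for events: the test asserts that each added namespace raises exactly one new RPC notification. The checks that follow verify the controller's path list (trsvcid 4420) the same way.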
00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:08.207 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:08.468 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:08.469 17:46:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:08.469 [2024-10-08 17:46:00.389927] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:08.469 [2024-10-08 17:46:00.390925] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:08.469 [2024-10-08 17:46:00.390951] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.469 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:08.730 [2024-10-08 17:46:00.478213] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:08.730 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.730 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:08.730 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:08.730 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:08.730 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:08.730 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:08.730 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:08.730 17:46:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:08.730 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:08.730 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:08.730 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.730 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:08.730 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:08.730 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:08.730 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:08.730 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.730 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:08.730 17:46:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:30:08.990 [2024-10-08 17:46:00.787843] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:08.990 [2024-10-08 17:46:00.787861] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:08.990 [2024-10-08 17:46:00.787867] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:09.936 17:46:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.936 [2024-10-08 17:46:01.666129] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:09.936 [2024-10-08 17:46:01.666153] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:09.936 [2024-10-08 17:46:01.674005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.936 
[2024-10-08 17:46:01.674024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.936 [2024-10-08 17:46:01.674034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.936 [2024-10-08 17:46:01.674041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:09.936 [2024-10-08 17:46:01.674050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.936 [2024-10-08 17:46:01.674066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.936 [2024-10-08 17:46:01.674074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.936 [2024-10-08 17:46:01.674081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.936 [2024-10-08 17:46:01.674089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eae50 is same with the state(6) to be set 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:09.936 [2024-10-08 17:46:01.684017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20eae50 (9): Bad file descriptor 00:30:09.936 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.936 [2024-10-08 17:46:01.694056] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:09.936 [2024-10-08 17:46:01.694391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-10-08 17:46:01.694406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20eae50 with addr=10.0.0.2, port=4420 00:30:09.936 [2024-10-08 17:46:01.694415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eae50 is same with the state(6) to be set 00:30:09.936 [2024-10-08 17:46:01.694427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20eae50 (9): Bad file descriptor 00:30:09.936 [2024-10-08 17:46:01.694445] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:09.936 [2024-10-08 17:46:01.694452] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:09.936 [2024-10-08 17:46:01.694461] 
nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:09.936 [2024-10-08 17:46:01.694473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.936 [2024-10-08 17:46:01.704111] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:09.936 [2024-10-08 17:46:01.704408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-10-08 17:46:01.704420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20eae50 with addr=10.0.0.2, port=4420 00:30:09.936 [2024-10-08 17:46:01.704428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eae50 is same with the state(6) to be set 00:30:09.936 [2024-10-08 17:46:01.704443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20eae50 (9): Bad file descriptor 00:30:09.937 [2024-10-08 17:46:01.704459] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:09.937 [2024-10-08 17:46:01.704466] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:09.937 [2024-10-08 17:46:01.704473] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:09.937 [2024-10-08 17:46:01.704490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.937 [2024-10-08 17:46:01.714164] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:09.937 [2024-10-08 17:46:01.714360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-10-08 17:46:01.714373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20eae50 with addr=10.0.0.2, port=4420 00:30:09.937 [2024-10-08 17:46:01.714380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eae50 is same with the state(6) to be set 00:30:09.937 [2024-10-08 17:46:01.714392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20eae50 (9): Bad file descriptor 00:30:09.937 [2024-10-08 17:46:01.714402] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:09.937 [2024-10-08 17:46:01.714409] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:09.937 [2024-10-08 17:46:01.714416] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:09.937 [2024-10-08 17:46:01.714427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
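[Editor's note] The autotest_common.sh@914-@918 lines traced above are the harness's generic poll-until-true helper: the whole condition is passed as a single string and eval'd up to ten times before the test gives up. A minimal reconstruction from the traced lines; the delay between polls is not visible in this trace, so the sleep below is an assumption:

# Reconstructed from the autotest_common.sh@914-@918 xtrace above.
waitforcondition() {
	local cond=$1   # e.g. 'get_notification_count && ((notification_count == expected_count))'
	local max=10

	while ((max--)); do
		# The condition is an arbitrary shell expression, re-evaluated
		# on each pass, exactly as the @917 eval lines show.
		if eval "$cond"; then
			return 0
		fi
		sleep 1 # assumed back-off; only the @916 countdown is visible here
	done
	return 1
}

Polling this way means a slow AER or discovery-log round trip just costs extra iterations instead of failing the test outright.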
00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:09.937 [2024-10-08 17:46:01.724218] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:09.937 [2024-10-08 17:46:01.724516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-10-08 17:46:01.724529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20eae50 with addr=10.0.0.2, port=4420 00:30:09.937 [2024-10-08 17:46:01.724536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eae50 is same with the state(6) to be set 00:30:09.937 [2024-10-08 17:46:01.724547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20eae50 (9): Bad file descriptor 00:30:09.937 [2024-10-08 17:46:01.724571] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:09.937 [2024-10-08 17:46:01.724579] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:09.937 [2024-10-08 17:46:01.724586] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:09.937 [2024-10-08 17:46:01.724597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
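[Editor's note] The backslash-heavy checks such as [[ nvme0 == \n\v\m\e\0 ]] above are not corruption: the right-hand side of [[ ... == ... ]] is a glob pattern when unquoted, so the harness quotes the expected value to force a literal match, and xtrace renders that quoting as per-character escapes. The same comparison in plain form, using the get_bdev_list helper traced just below:

# Quoting the RHS of [[ == ]] disables glob matching; xtrace prints the
# quoted value with every character escaped.
expected="nvme0n1 nvme0n2"
[[ "$(get_bdev_list)" == "$expected" ]] && echo "bdev list matches"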
00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:09.937 [2024-10-08 17:46:01.734271] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:09.937 [2024-10-08 17:46:01.734567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-10-08 17:46:01.734580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20eae50 with addr=10.0.0.2, port=4420 00:30:09.937 [2024-10-08 17:46:01.734587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eae50 is same with the state(6) to be set 00:30:09.937 [2024-10-08 17:46:01.734598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20eae50 (9): Bad file descriptor 00:30:09.937 [2024-10-08 17:46:01.734615] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:09.937 [2024-10-08 17:46:01.734622] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:09.937 [2024-10-08 17:46:01.734629] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:09.937 [2024-10-08 17:46:01.734640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
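[Editor's note] get_bdev_list (discovery.sh@55), get_subsystem_names (discovery.sh@59), and get_subsystem_paths (discovery.sh@63, traced just below) are one-line wrappers over the host app's RPC socket; sort | xargs flattens the JSON-derived names into one space-separated string so the wait conditions can use plain string equality. The reset spam interleaved here is expected: errno 111 is ECONNREFUSED, because the @127 step removed the 4420 listener, and the driver keeps retrying until discovery re-attaches it on 4421. Sketches assembled from the traced pipelines:

# Assembled from the host/discovery.sh@55/@59/@63 pipelines in this trace.
# rpc_cmd wraps scripts/rpc.py; /tmp/host.sock is the host app's RPC socket.
get_bdev_list() {
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_names() {
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

# Which listener ports controller $1 currently has paths to ("4421" once
# the 4420 listener has been removed and discovery has re-attached).
get_subsystem_paths() {
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
		jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}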
00:30:09.937 [2024-10-08 17:46:01.744326] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:09.937 [2024-10-08 17:46:01.744625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-10-08 17:46:01.744637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20eae50 with addr=10.0.0.2, port=4420 00:30:09.937 [2024-10-08 17:46:01.744644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eae50 is same with the state(6) to be set 00:30:09.937 [2024-10-08 17:46:01.744655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20eae50 (9): Bad file descriptor 00:30:09.937 [2024-10-08 17:46:01.744677] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:09.937 [2024-10-08 17:46:01.744685] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:09.937 [2024-10-08 17:46:01.744692] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:09.937 [2024-10-08 17:46:01.744702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.937 [2024-10-08 17:46:01.752705] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:09.937 [2024-10-08 17:46:01.752723] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.937 17:46:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:09.937 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.938 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:09.938 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.938 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:09.938 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:30:10.199 17:46:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.199 17:46:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.199 17:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.199 17:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:30:10.199 17:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:30:10.199 17:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:10.199 17:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:10.199 17:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:10.199 17:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.199 17:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.139 [2024-10-08 17:46:03.098137] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:11.139 [2024-10-08 17:46:03.098150] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:11.139 [2024-10-08 17:46:03.098159] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:11.400 [2024-10-08 17:46:03.186415] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:11.400 [2024-10-08 17:46:03.250816] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:11.400 [2024-10-08 17:46:03.250839] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:11.400 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.400 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:11.400 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:11.400 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:11.400 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:11.400 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:11.400 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:11.400 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:11.400 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:30:11.400 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.400 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.400 request: 00:30:11.400 { 00:30:11.400 "name": "nvme", 00:30:11.400 "trtype": "tcp", 00:30:11.400 "traddr": "10.0.0.2", 00:30:11.400 "adrfam": "ipv4", 00:30:11.400 "trsvcid": "8009", 00:30:11.400 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:11.400 "wait_for_attach": true, 00:30:11.400 "method": "bdev_nvme_start_discovery", 00:30:11.400 "req_id": 1 00:30:11.400 } 00:30:11.400 Got JSON-RPC error response 00:30:11.400 response: 00:30:11.400 { 00:30:11.400 "code": -17, 00:30:11.400 "message": "File exists" 00:30:11.400 } 00:30:11.400 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:11.400 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.401 request: 00:30:11.401 { 00:30:11.401 "name": "nvme_second", 00:30:11.401 "trtype": "tcp", 00:30:11.401 "traddr": "10.0.0.2", 00:30:11.401 "adrfam": "ipv4", 00:30:11.401 "trsvcid": "8009", 00:30:11.401 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:11.401 "wait_for_attach": true, 00:30:11.401 "method": "bdev_nvme_start_discovery", 00:30:11.401 "req_id": 1 00:30:11.401 } 00:30:11.401 Got JSON-RPC error response 00:30:11.401 response: 00:30:11.401 { 00:30:11.401 "code": -17, 00:30:11.401 "message": "File exists" 00:30:11.401 } 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:11.401 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:30:11.661 17:46:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.661 17:46:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.603 [2024-10-08 17:46:04.510218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.603 [2024-10-08 17:46:04.510241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2248200 with addr=10.0.0.2, port=8010 00:30:12.603 [2024-10-08 17:46:04.510251] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:12.603 [2024-10-08 17:46:04.510257] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:12.603 [2024-10-08 17:46:04.510262] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:13.543 [2024-10-08 17:46:05.512554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.543 [2024-10-08 17:46:05.512572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2248200 with addr=10.0.0.2, port=8010 00:30:13.543 [2024-10-08 17:46:05.512583] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:13.543 [2024-10-08 17:46:05.512588] nvme.c: 831:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:30:13.543 [2024-10-08 17:46:05.512593] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:14.927 [2024-10-08 17:46:06.514602] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:14.927 request: 00:30:14.927 { 00:30:14.927 "name": "nvme_second", 00:30:14.927 "trtype": "tcp", 00:30:14.927 "traddr": "10.0.0.2", 00:30:14.927 "adrfam": "ipv4", 00:30:14.927 "trsvcid": "8010", 00:30:14.927 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:14.927 "wait_for_attach": false, 00:30:14.927 "attach_timeout_ms": 3000, 00:30:14.927 "method": "bdev_nvme_start_discovery", 00:30:14.927 "req_id": 1 00:30:14.927 } 00:30:14.927 Got JSON-RPC error response 00:30:14.927 response: 00:30:14.927 { 00:30:14.927 "code": -110, 00:30:14.927 "message": "Connection timed out" 00:30:14.927 } 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 494391 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:14.927 rmmod nvme_tcp 00:30:14.927 rmmod nvme_fabrics 00:30:14.927 rmmod nvme_keyring 00:30:14.927 17:46:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 494076 ']' 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 494076 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 494076 ']' 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 494076 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 494076 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 494076' 00:30:14.927 killing process with pid 494076 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 494076 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 494076 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:14.927 17:46:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.477 17:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:17.477 00:30:17.477 real 0m20.034s 00:30:17.477 user 0m22.954s 00:30:17.477 sys 0m7.185s 00:30:17.477 17:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:17.477 17:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.477 
************************************ 00:30:17.477 END TEST nvmf_host_discovery 00:30:17.477 ************************************ 00:30:17.477 17:46:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:17.477 17:46:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:17.477 17:46:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:17.477 17:46:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.477 ************************************ 00:30:17.477 START TEST nvmf_host_multipath_status 00:30:17.477 ************************************ 00:30:17.477 17:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:17.477 * Looking for test storage... 00:30:17.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:17.477 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:17.477 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:30:17.477 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:17.477 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:17.477 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:17.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.478 --rc genhtml_branch_coverage=1 00:30:17.478 --rc genhtml_function_coverage=1 00:30:17.478 --rc genhtml_legend=1 00:30:17.478 --rc geninfo_all_blocks=1 00:30:17.478 --rc geninfo_unexecuted_blocks=1 00:30:17.478 00:30:17.478 ' 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:17.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.478 --rc genhtml_branch_coverage=1 00:30:17.478 --rc genhtml_function_coverage=1 00:30:17.478 --rc genhtml_legend=1 00:30:17.478 --rc geninfo_all_blocks=1 00:30:17.478 --rc geninfo_unexecuted_blocks=1 00:30:17.478 00:30:17.478 ' 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:17.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.478 --rc genhtml_branch_coverage=1 00:30:17.478 --rc genhtml_function_coverage=1 00:30:17.478 --rc genhtml_legend=1 00:30:17.478 --rc geninfo_all_blocks=1 00:30:17.478 --rc geninfo_unexecuted_blocks=1 00:30:17.478 00:30:17.478 ' 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:17.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.478 --rc genhtml_branch_coverage=1 00:30:17.478 --rc genhtml_function_coverage=1 00:30:17.478 --rc genhtml_legend=1 00:30:17.478 --rc geninfo_all_blocks=1 00:30:17.478 --rc geninfo_unexecuted_blocks=1 00:30:17.478 00:30:17.478 ' 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
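[Editor's note] Before its main body, multipath_status.sh runs the same lcov version gate as the other suites: lt 1.15 2 splits both versions on dots/dashes and compares numeric fields left to right to decide whether the installed lcov needs the legacy --rc flags. A condensed sketch of the cmp_versions logic traced above (scripts/common.sh); details not visible in the trace, such as handling of unequal field counts, follow the obvious convention and are assumptions:

# Condensed from the scripts/common.sh cmp_versions xtrace above.
# Succeeds (returns 0) when $1 is strictly lower than $2.
lt() {
	local -a ver1 ver2
	local v
	IFS=.-: read -ra ver1 <<< "$1"
	IFS=.-: read -ra ver2 <<< "$2"
	for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
		((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
		((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
	done
	return 1 # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov predates 2.x: keep the legacy --rc coverage flags"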
00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:17.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:17.478 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:17.479 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:17.479 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:17.479 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.479 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:17.479 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.479 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:17.479 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:17.479 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:30:17.479 17:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:30:25.627 17:46:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:25.627 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
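The gather_supported_nvmf_pci_devs pass traced above walks the PCI bus and buckets NICs by vendor:device ID (e810, x722, mlx) before printing the "Found ..." lines. A minimal standalone sketch of that bucketing, assuming only standard sysfs attributes -- this is an illustration, not the common.sh implementation, and the sysfs loop replaces its pci_bus_cache lookup:

    #!/usr/bin/env bash
    # Classify Intel E810 NICs by vendor:device ID, echoing in the same
    # "Found <bdf> (<vendor> - <device>)" format seen in the log above.
    # The ID list mirrors the e810 entries traced above; the rest is illustrative.
    e810_ids=("0x1592" "0x159b")
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")    # e.g. 0x8086 (Intel)
        device=$(<"$pci/device")    # e.g. 0x159b
        [[ $vendor == 0x8086 ]] || continue
        for id in "${e810_ids[@]}"; do
            [[ $device == "$id" ]] && echo "Found ${pci##*/} ($vendor - $device)"
        done
    done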
00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:25.627 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:25.627 Found net devices under 0000:31:00.0: cvl_0_0 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: 
cvl_0_1' 00:30:25.627 Found net devices under 0000:31:00.1: cvl_0_1 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:25.627 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:25.628 17:46:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:25.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:25.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:30:25.628 00:30:25.628 --- 10.0.0.2 ping statistics --- 00:30:25.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.628 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:25.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:25.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:30:25.628 00:30:25.628 --- 10.0.0.1 ping statistics --- 00:30:25.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.628 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=500595 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 500595 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 500595 ']' 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:25.628 17:46:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:25.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:25.628 17:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:25.628 [2024-10-08 17:46:16.927914] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:30:25.628 [2024-10-08 17:46:16.927992] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:25.628 [2024-10-08 17:46:17.019013] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:25.628 [2024-10-08 17:46:17.113137] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:25.628 [2024-10-08 17:46:17.113199] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:25.628 [2024-10-08 17:46:17.113207] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:25.628 [2024-10-08 17:46:17.113214] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:25.628 [2024-10-08 17:46:17.113220] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:25.628 [2024-10-08 17:46:17.114485] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:25.628 [2024-10-08 17:46:17.114489] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.889 17:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:25.889 17:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:30:25.889 17:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:25.889 17:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:25.889 17:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:25.889 17:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:25.889 17:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=500595 00:30:25.889 17:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:26.151 [2024-10-08 17:46:17.953632] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:26.151 17:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:26.412 Malloc0 00:30:26.412 17:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:30:26.674 17:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:26.674 17:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:26.934 [2024-10-08 17:46:18.783077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:26.934 17:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:27.195 [2024-10-08 17:46:18.975607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:27.195 17:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:27.195 17:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=501000 00:30:27.195 17:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:27.195 17:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 501000 /var/tmp/bdevperf.sock 00:30:27.195 17:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 501000 ']' 00:30:27.195 17:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:27.195 17:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:27.195 17:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:27.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
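Condensed from the rpc.py calls traced above, the target side of this test reduces to six RPCs: create the TCP transport, back it with a 64 MB / 512 B malloc bdev, and expose one subsystem on two listeners (ports 4420 and 4421) so the host sees two paths to the same namespace. Replayed by hand, with the script path exactly as logged (these go to the default target RPC socket, hence no -s flag):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421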
00:30:27.195 17:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:27.196 17:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:28.135 17:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:28.135 17:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:30:28.135 17:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:28.135 17:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:28.395 Nvme0n1 00:30:28.395 17:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:28.966 Nvme0n1 00:30:28.966 17:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:28.966 17:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:30.879 17:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:30.879 17:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:31.140 17:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:31.401 17:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:32.343 17:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:32.343 17:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:32.343 17:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:32.343 17:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:32.603 17:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:32.603 17:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:32.603 17:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:32.603 17:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:32.603 17:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:32.604 17:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:32.604 17:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:32.604 17:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:32.864 17:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:32.864 17:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:32.864 17:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:32.864 17:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:33.124 17:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.124 17:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:33.124 17:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.124 17:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:33.124 17:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.124 17:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:33.124 17:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.124 17:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:33.384 17:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.384 17:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:33.384 17:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
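Each check_status round above is six of these probes. port_status (host/multipath_status.sh@64) asks bdevperf, over its private RPC socket, for the current I/O paths and compares one field per listener; a hedged reconstruction follows, with the jq filter verbatim from the trace and the local variable names assumed:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock
    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        # Query bdevperf's view of the paths and pull one attribute for one listener.
        actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
        [[ $actual == "$expected" ]]
    }
    port_status 4420 current true    # e.g.: the path via listener 4420 should be the active one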
00:30:33.644 17:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:33.644 17:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:35.032 17:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:35.032 17:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:35.032 17:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:35.032 17:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:35.032 17:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:35.033 17:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:35.033 17:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:35.033 17:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:35.033 17:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:35.033 17:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:35.033 17:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:35.033 17:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:35.293 17:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:35.293 17:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:35.293 17:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:35.293 17:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:35.553 17:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:35.553 17:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:35.553 17:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
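The set_ANA_state helper (host/multipath_status.sh@59-60) that drives each scenario is just the pair of listener RPCs traced above, one per port; sketched here with the subsystem NQN and addresses exactly as logged, positional argument handling assumed:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    set_ANA_state() {    # $1 -> ANA state for port 4420, $2 -> ANA state for port 4421
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
    set_ANA_state non_optimized optimized    # as in the step above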
00:30:35.553 17:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:35.813 17:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:35.813 17:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:35.813 17:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:35.813 17:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:35.813 17:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:35.813 17:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:35.813 17:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:36.073 17:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:36.333 17:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:37.273 17:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:37.273 17:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:37.273 17:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.273 17:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:37.533 17:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.533 17:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:37.533 17:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.533 17:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:37.533 17:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:37.533 17:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:37.533 17:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.533 17:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:37.793 17:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.793 17:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:37.793 17:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.793 17:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:38.053 17:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:38.053 17:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:38.053 17:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:38.053 17:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:38.313 17:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:38.313 17:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:38.313 17:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:38.313 17:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:38.313 17:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:38.313 17:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:30:38.313 17:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:38.573 17:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:38.573 17:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:39.956 17:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:39.956 17:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:39.956 17:46:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.956 17:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:39.956 17:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:39.956 17:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:39.956 17:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.956 17:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:39.956 17:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:39.956 17:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:39.956 17:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:39.956 17:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.219 17:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.219 17:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:40.219 17:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.219 17:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:40.479 17:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.479 17:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:40.479 17:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.479 17:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:40.479 17:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.479 17:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:40.479 17:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.479 17:46:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:40.740 17:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:40.740 17:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:30:40.740 17:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:41.000 17:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:41.000 17:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:30:42.382 17:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:30:42.382 17:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:42.382 17:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.382 17:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:42.382 17:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:42.382 17:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:42.382 17:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:42.382 17:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.382 17:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:42.382 17:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:42.382 17:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.382 17:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:42.642 17:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:42.642 17:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:42.642 17:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.642 17:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:42.902 17:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:42.902 17:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:42.902 17:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.902 17:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:43.162 17:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:43.162 17:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:43.162 17:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.162 17:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:43.162 17:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:43.162 17:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:30:43.162 17:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:43.423 17:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:43.683 17:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:30:44.623 17:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:30:44.623 17:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:44.623 17:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:44.623 17:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:44.883 17:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:44.883 17:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:44.883 17:46:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:44.883 17:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:44.883 17:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:44.883 17:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:44.883 17:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:44.883 17:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:45.144 17:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:45.144 17:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:45.144 17:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:45.144 17:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:45.404 17:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:45.404 17:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:45.404 17:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:45.404 17:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:45.404 17:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:45.404 17:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:45.404 17:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:45.404 17:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:45.665 17:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:45.665 17:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:30:45.925 17:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:30:45.925 17:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:45.925 17:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:46.186 17:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:30:47.127 17:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:30:47.127 17:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:47.127 17:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.127 17:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:47.387 17:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:47.387 17:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:47.387 17:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.387 17:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:47.647 17:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:47.647 17:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:47.647 17:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.647 17:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:47.908 17:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:47.908 17:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:47.908 17:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.908 17:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:47.908 17:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:47.908 17:46:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:47.908 17:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:47.908 17:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:48.168 17:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:48.169 17:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:48.169 17:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:48.169 17:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:48.430 17:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:48.430 17:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:30:48.430 17:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:48.430 17:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:48.690 17:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:30:49.632 17:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:30:49.632 17:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:49.632 17:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:49.632 17:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:49.893 17:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:49.893 17:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:49.893 17:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:49.893 17:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:50.154 17:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:50.154 17:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:50.154 17:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.154 17:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:50.154 17:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:50.154 17:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:50.154 17:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.154 17:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:50.415 17:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:50.415 17:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:50.415 17:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.416 17:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:50.676 17:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:50.676 17:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:50.676 17:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.676 17:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:50.938 17:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:50.938 17:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:30:50.938 17:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:50.938 17:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:51.199 17:46:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
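
The xtrace above repeatedly expands three helpers from host/multipath_status.sh (@59-60, @64, and @68-73). A minimal sketch of what the trace implies they do, reconstructed from the expanded commands in this log — the bodies are inferred rather than the verbatim SPDK source, and rpc_py/bdevperf_rpc_sock stand in for the literal paths shown in the trace:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    # port_status <trsvcid> <field> <expected>: ask bdevperf for its I/O paths
    # and assert one field (current/connected/accessible) of the path on that port.
    port_status() {
        local port=$1 field=$2 expected=$3 status
        status=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
        [[ "$status" == "$expected" ]]
    }

    # set_ANA_state <state_4420> <state_4421>: flip the ANA state of the two
    # target-side listeners (optimized / non_optimized / inaccessible).
    set_ANA_state() {
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # check_status <cur_4420> <cur_4421> <conn_4420> <conn_4421> <acc_4420> <acc_4421>
    check_status() {
        port_status 4420 current    "$1"
        port_status 4421 current    "$2"
        port_status 4420 connected  "$3"
        port_status 4421 connected  "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }

Each transition in the log follows the same rhythm: set_ANA_state flips the listener states, sleep 1 gives the host a moment to observe the change and re-evaluate its paths, and check_status then asserts, per port, which path bdevperf reports as current, connected, and accessible. Since the suite presumably runs under errexit (set -e), any mismatched [[ ... ]] comparison aborts the test.
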
00:30:52.141 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:30:52.141 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:52.141 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.141 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:52.413 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:52.413 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:52.413 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.413 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:52.682 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:52.682 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:52.682 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.682 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:52.682 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:52.682 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:52.682 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.682 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:52.960 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:52.960 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:52.960 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.960 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:53.241 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.241 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:53.241 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.241 17:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:53.241 17:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.241 17:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:30:53.241 17:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:53.517 17:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:53.517 17:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:30:54.960 17:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:30:54.960 17:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:54.960 17:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:54.960 17:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:54.960 17:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:54.960 17:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:54.960 17:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:54.960 17:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:54.960 17:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:54.960 17:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:54.960 17:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:54.960 17:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:55.244 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:30:55.244 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:55.244 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:55.244 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.533 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:55.533 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:55.533 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:55.533 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.533 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:55.533 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:55.533 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.533 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:55.814 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:55.814 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 501000 00:30:55.814 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 501000 ']' 00:30:55.814 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 501000 00:30:55.814 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:30:55.814 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:55.814 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 501000 00:30:55.814 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:30:55.814 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:30:55.814 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 501000' 00:30:55.814 killing process with pid 501000 00:30:55.814 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 501000 00:30:55.814 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 501000 00:30:55.814 { 00:30:55.814 "results": [ 00:30:55.814 { 00:30:55.814 "job": "Nvme0n1", 00:30:55.814 
"core_mask": "0x4", 00:30:55.814 "workload": "verify", 00:30:55.814 "status": "terminated", 00:30:55.814 "verify_range": { 00:30:55.814 "start": 0, 00:30:55.814 "length": 16384 00:30:55.814 }, 00:30:55.814 "queue_depth": 128, 00:30:55.814 "io_size": 4096, 00:30:55.814 "runtime": 26.743316, 00:30:55.814 "iops": 11807.062370275997, 00:30:55.814 "mibps": 46.12133738389061, 00:30:55.814 "io_failed": 0, 00:30:55.814 "io_timeout": 0, 00:30:55.814 "avg_latency_us": 10820.624436449623, 00:30:55.814 "min_latency_us": 535.8933333333333, 00:30:55.814 "max_latency_us": 3075822.933333333 00:30:55.814 } 00:30:55.814 ], 00:30:55.814 "core_count": 1 00:30:55.814 } 00:30:55.814 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 501000 00:30:55.814 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:56.117 [2024-10-08 17:46:19.056433] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:30:56.117 [2024-10-08 17:46:19.056507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid501000 ] 00:30:56.117 [2024-10-08 17:46:19.139522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.117 [2024-10-08 17:46:19.230030] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:30:56.117 Running I/O for 90 seconds... 00:30:56.117 10216.00 IOPS, 39.91 MiB/s [2024-10-08T15:46:48.109Z] 10567.00 IOPS, 41.28 MiB/s [2024-10-08T15:46:48.109Z] 10688.00 IOPS, 41.75 MiB/s [2024-10-08T15:46:48.109Z] 11127.75 IOPS, 43.47 MiB/s [2024-10-08T15:46:48.109Z] 11484.80 IOPS, 44.86 MiB/s [2024-10-08T15:46:48.109Z] 11719.83 IOPS, 45.78 MiB/s [2024-10-08T15:46:48.109Z] 11899.43 IOPS, 46.48 MiB/s [2024-10-08T15:46:48.109Z] 12001.38 IOPS, 46.88 MiB/s [2024-10-08T15:46:48.109Z] 12086.22 IOPS, 47.21 MiB/s [2024-10-08T15:46:48.109Z] 12139.80 IOPS, 47.42 MiB/s [2024-10-08T15:46:48.109Z] 12184.45 IOPS, 47.60 MiB/s [2024-10-08T15:46:48.109Z] [2024-10-08 17:46:32.774558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.774593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.774611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.774618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.774628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.774634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.774644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.774650] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.774660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.774666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.774676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.774681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.774692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:111336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.774697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.774707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.774713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.774723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.774728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.774738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:111360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.774748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.774759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.774764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.774775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.774780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.774790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.774796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.774806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:111392 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:56.117 [2024-10-08 17:46:32.774811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.774822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.774827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.774837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.774843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.774853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.774859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.775004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.775012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.775024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:111432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.775029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.775040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:111440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.775045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.775055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.775061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.775072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.775078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.775090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.775096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.775107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:33 nsid:1 lba:111472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.775112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:56.117 [2024-10-08 17:46:32.775123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.117 [2024-10-08 17:46:32.775128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:111496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:111504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:111520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:111528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:111536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775263] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:111552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:111568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:111576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:111584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e 
p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:111648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:111664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775725] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.775735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.775741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.776115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.118 [2024-10-08 17:46:32.776126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:56.118 [2024-10-08 17:46:32.776139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.118 [2024-10-08 17:46:32.776145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110848 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:110872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.119 [2024-10-08 17:46:32.776384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.119 [2024-10-08 17:46:32.776562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f 
p:0 m:0 dnr:0 00:30:56.119 [2024-10-08 17:46:32.776573] ... [2024-10-08 17:46:32.793083] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated READ and WRITE command/completion notice pairs, sqid:1 nsid:1 lba:110792 through lba:111808 len:8, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 (identical notice pairs elided) 00:30:56.125 [2024-10-08
17:46:32.793097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.793104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.793118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.793125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.793139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.793146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.793160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:111648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.793167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.793656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.793671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.793687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.793694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.793708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.793715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.793730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.793737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.793750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.793758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.793771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.793778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.793792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.793799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.793813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.793820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.793834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.793842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.793855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.793862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.793876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.793883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.793897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.793904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.793918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.793926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.793942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.793950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.793964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.793970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.793990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.793997] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.794011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.794018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.794032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.794039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.794053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.125 [2024-10-08 17:46:32.794060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.794074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-10-08 17:46:32.794081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.794094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-10-08 17:46:32.794101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.801697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-10-08 17:46:32.801723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.801738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-10-08 17:46:32.801746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.801759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-10-08 17:46:32.801766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.801780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-10-08 17:46:32.801788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.801806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:56.125 [2024-10-08 17:46:32.801813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.801827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-10-08 17:46:32.801834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.801848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-10-08 17:46:32.801855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.801869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-10-08 17:46:32.801875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.801889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-10-08 17:46:32.801896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:56.125 [2024-10-08 17:46:32.801914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-10-08 17:46:32.801924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.801943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.801953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.801972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.801993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:111808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.126 [2024-10-08 17:46:32.802051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:56 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 
17:46:32.802361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:111000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:111008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:111016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:111032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:111048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:111056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:111064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:111072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:111080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:111088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:111096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:111104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:111112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:111120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:111128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.802882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:111136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.802892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.803761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:111144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.803780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:56.126 [2024-10-08 17:46:32.803804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:111152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-10-08 17:46:32.803814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.803834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:111160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-10-08 17:46:32.803843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.803862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-10-08 17:46:32.803872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.803891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:111176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-10-08 17:46:32.803901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.803920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-10-08 17:46:32.803929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.803948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:111192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-10-08 17:46:32.803958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.803984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-10-08 17:46:32.803995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:111208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-10-08 17:46:32.804023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:111216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-10-08 17:46:32.804056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:111224 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:56.127 [2024-10-08 17:46:32.804085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-10-08 17:46:32.804114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-10-08 17:46:32.804143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-10-08 17:46:32.804171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-10-08 17:46:32.804201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-10-08 17:46:32.804230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:111272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-10-08 17:46:32.804258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-10-08 17:46:32.804287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:34 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804649] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:111392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:111424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:111432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 
sqhd:0019 p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:111472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.804978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.804997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:111480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.127 [2024-10-08 17:46:32.805007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:56.127 [2024-10-08 17:46:32.805025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.805035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.805054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.805063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.805082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.805092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.805113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:111512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.805123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.805142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.805151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.805170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:111528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.805180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.805198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.805208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.805227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:111544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.805236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.805255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.805265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.805284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:111560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.805293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.805313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.805322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.805342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.805351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.805370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.805380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.805398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.805408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.805427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.805437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.805455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:111608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.805467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.805486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:111616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.805495] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.805514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.805524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.805542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.805552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.805571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.805581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.806403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.806419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.806441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.806451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.806469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:111664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.806479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.806498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.806507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.806526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.806537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.806555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.806565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.806584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.806594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.806612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.806625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.806646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.806655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.806674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.806683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.806702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.806712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.806731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.806741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.806759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.806769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.806788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.806798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.806816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.806827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.806845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.806855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.806874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:27 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.806884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.806903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.806912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.806931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.806941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:56.128 [2024-10-08 17:46:32.806960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-10-08 17:46:32.806970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807173] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:111808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.129 [2024-10-08 17:46:32.807443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 
cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:111000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:111032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:111048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.807981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.807991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.808011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:111064 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.808021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.808040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:111072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.808051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.808070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:111080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-10-08 17:46:32.808080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:56.129 [2024-10-08 17:46:32.808099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.808108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.808127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:111096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.808137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.808156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:111104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.808165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.808184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:111112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.808194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.808212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:111120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.808222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.808241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:111128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.808251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:111136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.809055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809077] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.809087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:111152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.809117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:111160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.809147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.809179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.809208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:111184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.809236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.809265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:111200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.809294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.809323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.809352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 
17:46:32.809371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:111224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.809381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.809410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.809438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.809468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.809497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.809527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:111272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.809556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-10-08 17:46:32.809585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.130 [2024-10-08 17:46:32.809613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.130 [2024-10-08 17:46:32.809642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.130 [2024-10-08 17:46:32.809670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.130 [2024-10-08 17:46:32.809698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.130 [2024-10-08 17:46:32.809727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.130 [2024-10-08 17:46:32.809755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.130 [2024-10-08 17:46:32.809784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.130 [2024-10-08 17:46:32.809812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.130 [2024-10-08 17:46:32.809840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:111360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.130 [2024-10-08 17:46:32.809869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.130 [2024-10-08 17:46:32.809899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:56.130 [2024-10-08 17:46:32.809919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.809928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.809947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.809957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.809981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.809991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:111424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:111432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:111456 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:111472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:111480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:111488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:111496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:111512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:111528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:64 nsid:1 lba:111536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:111544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:111560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:111568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:111592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:111600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810784] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.810841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.810851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.811659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.811674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.811694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.811705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.811724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.811737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.811755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.811765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.811784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.811794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.811813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.811823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.811841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.811851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0036 p:0 
m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.811870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.811879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:56.131 [2024-10-08 17:46:32.811898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-10-08 17:46:32.811908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.811926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-10-08 17:46:32.811936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.811955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-10-08 17:46:32.811965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.811990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-10-08 17:46:32.812000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-10-08 17:46:32.812029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-10-08 17:46:32.812058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-10-08 17:46:32.812087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-10-08 17:46:32.812117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-10-08 17:46:32.812145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-10-08 17:46:32.812174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-10-08 17:46:32.812202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-10-08 17:46:32.812231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-10-08 17:46:32.812259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.132 [2024-10-08 17:46:32.812287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.132 [2024-10-08 17:46:32.812316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.132 [2024-10-08 17:46:32.812345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.132 [2024-10-08 17:46:32.812373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.132 [2024-10-08 17:46:32.812402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.132 [2024-10-08 17:46:32.812431] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.132 [2024-10-08 17:46:32.812462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.132 [2024-10-08 17:46:32.812490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.132 [2024-10-08 17:46:32.812518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.132 [2024-10-08 17:46:32.812547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.132 [2024-10-08 17:46:32.812575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.132 [2024-10-08 17:46:32.812603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.132 [2024-10-08 17:46:32.812632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.132 [2024-10-08 17:46:32.812661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.132 [2024-10-08 17:46:32.812689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:56.132 [2024-10-08 17:46:32.812708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111808 len:8 SGL DATA 
00:30:56.132 [2024-10-08 17:46:32.812] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: qid:1 I/O command dump on nsid:1, lba:110792-111808, len:8 (READ entries: SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; WRITE entries: SGL DATA BLOCK OFFSET 0x0 len:0x1000); every command completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0
00:30:56.137 [2024-10-08 17:46:32.818] [several hundred near-identical NOTICE command/completion line pairs collapsed; entries differ only in cid, lba, and sqhd]
cid:79 nsid:1 lba:111528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.137 [2024-10-08 17:46:32.818851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:56.137 [2024-10-08 17:46:32.818864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.137 [2024-10-08 17:46:32.818871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:56.137 [2024-10-08 17:46:32.818885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:111544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.137 [2024-10-08 17:46:32.818892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:56.137 [2024-10-08 17:46:32.818906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:111552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.137 [2024-10-08 17:46:32.818913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:56.137 [2024-10-08 17:46:32.818927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.818935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.818949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.818956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.818969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:111576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.818980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.818993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:111584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.819001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.819014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.819022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.819035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.819043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.819058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:111608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.819065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.819080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:111616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.819087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.819686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.819697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.819713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.819720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.819734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.819741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.819755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.819763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.819778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.819785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.819803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.819811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.819824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.819831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.819845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.819853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 
dnr:0 00:30:56.138 [2024-10-08 17:46:32.819867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.819874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.819888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.819896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.819910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.819917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.819931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.819938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.819952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.819959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.819979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.819987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.820001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.820008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.820022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.820030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.820044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.820051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:56.138 [2024-10-08 17:46:32.820066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.138 [2024-10-08 17:46:32.820073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
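[editor's note: the "(03/02)" in these completions is spdk_nvme_print_completion rendering NVMe Status Code Type 0x3 (Path Related Status) and Status Code 0x2 (Asymmetric Access Inaccessible), i.e. each I/O failed because the path's ANA state is "inaccessible" -- the status a failover test is expected to provoke while the active listener is taken down. As a minimal sketch, not part of this test, of how host code could check for that status; the helper name is hypothetical, and the SPDK_NVME_* constants are assumed to match spdk/nvme_spec.h:

    #include <stdbool.h>
    #include "spdk/nvme_spec.h"

    /* Hypothetical helper: true when a completion carries the path-related
     * ANA-inaccessible status that the log above prints as "(03/02)". */
    static bool
    cpl_is_ana_inaccessible(const struct spdk_nvme_cpl *cpl)
    {
            return cpl->status.sct == SPDK_NVME_SCT_PATH &&
                   cpl->status.sc == SPDK_NVME_SC_ASYMMETRIC_ACCESS_INACCESSIBLE;
    }

A multipath-aware host treats such completions (note dnr:0, do-not-retry clear) as retryable on another path rather than as data errors, which is presumably why they are logged at *NOTICE* severity here.]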
00:30:56.138 [2024-10-08 17:46:32.820194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.138 [2024-10-08 17:46:32.820201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
[... further READ command/completion *NOTICE* pairs on qid:1, one per 8-block read from lba 110800 through 111280, plus one interleaved WRITE at lba 111808 (17:46:32.820-32.825), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), omitted ...]
00:30:56.140 [2024-10-08 17:46:32.825712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:56.140 [2024-10-08 17:46:32.825717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
[... further WRITE command/completion *NOTICE* pairs on qid:1, one per 8-block write from lba 111296 through 111752 (17:46:32.825-32.827), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), omitted ...]
00:30:56.141 [2024-10-08 17:46:32.827131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85
nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.141 [2024-10-08 17:46:32.827136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:56.141 [2024-10-08 17:46:32.827147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.141 [2024-10-08 17:46:32.827152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:56.141 [2024-10-08 17:46:32.827163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.141 [2024-10-08 17:46:32.827168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.142 [2024-10-08 17:46:32.827184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.142 [2024-10-08 17:46:32.827201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.142 [2024-10-08 17:46:32.827217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827293] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 
p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.142 [2024-10-08 17:46:32.827476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:110976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:111000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:111016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:111032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:111040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:111048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 
17:46:32.827767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:111064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:111072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.142 [2024-10-08 17:46:32.827818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:56.142 [2024-10-08 17:46:32.827829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:111080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.827835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.827845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:111088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.827851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.827862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:111096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.827867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:111104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:111120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:111128 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:111136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:111152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:111192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:111200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828531] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:111216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:111224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.143 [2024-10-08 17:46:32.828684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0003 
p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.143 [2024-10-08 17:46:32.828700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.143 [2024-10-08 17:46:32.828716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.143 [2024-10-08 17:46:32.828732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.143 [2024-10-08 17:46:32.828749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.143 [2024-10-08 17:46:32.828765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.143 [2024-10-08 17:46:32.828781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.828792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.143 [2024-10-08 17:46:32.828798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:56.143 [2024-10-08 17:46:32.829093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.143 [2024-10-08 17:46:32.829102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:111368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:111392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:111424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:111440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 
17:46:32.829299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:111456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:111480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:111496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:111504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:111512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111520 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:111528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:111536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:111560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:111568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:111592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829616] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:111600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.829984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.829994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.830000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.830012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:111664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.830018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:56.144 [2024-10-08 17:46:32.830028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.144 [2024-10-08 17:46:32.830034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:56.145 [2024-10-08 
17:46:32.830044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.145 [2024-10-08 17:46:32.830050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:56.145 [2024-10-08 17:46:32.830060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.145 [2024-10-08 17:46:32.830066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:56.145 [2024-10-08 17:46:32.830077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.145 [2024-10-08 17:46:32.830082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:56.145 [2024-10-08 17:46:32.830093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.145 [2024-10-08 17:46:32.830099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:56.145 [2024-10-08 17:46:32.830109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.145 [2024-10-08 17:46:32.830115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:56.145 [2024-10-08 17:46:32.830125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.145 [2024-10-08 17:46:32.830131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:56.145 [2024-10-08 17:46:32.830141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.145 [2024-10-08 17:46:32.830147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:56.145 [2024-10-08 17:46:32.830157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.145 [2024-10-08 17:46:32.830163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:56.145 [2024-10-08 17:46:32.830173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.145 [2024-10-08 17:46:32.830179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:56.145 [2024-10-08 17:46:32.830189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.145 [2024-10-08 17:46:32.830195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:56.145 [2024-10-08 17:46:32.830207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.145 [2024-10-08 17:46:32.830213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:56.145 [2024-10-08 17:46:32.830223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.145 [2024-10-08 17:46:32.830229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:56.145 [2024-10-08 17:46:32.830239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.145 [2024-10-08 17:46:32.830245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:56.145 [2024-10-08 17:46:32.830255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.145 [2024-10-08 17:46:32.830261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:56.145 [2024-10-08 17:46:32.830271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.145 [2024-10-08 17:46:32.830277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:56.145 [2024-10-08 17:46:32.830287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.145 [2024-10-08 17:46:32.830293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:56.145 [2024-10-08 17:46:32.830303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.145 [2024-10-08 17:46:32.830309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:56.145 [2024-10-08 17:46:32.830320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.145 [2024-10-08 17:46:32.830325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:56.145 [2024-10-08 17:46:32.830336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.145 [2024-10-08 17:46:32.830342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:56.145 [2024-10-08 17:46:32.830352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.145 [2024-10-08 17:46:32.830358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:30:56.145 [2024-10-08 17:46:32.830369 - 17:46:32.838396] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) commands, sqid:1 nsid:1 len:8, lba 110792-111808; every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 (the command/completion pair repeats for each lba in the range, with cid and sqhd incrementing)
17:46:32.838150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.150 [2024-10-08 17:46:32.838155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:32.838167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:32.838173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:32.838185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:32.838190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:32.838202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:32.838207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:32.838219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:32.838224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:32.838237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:32.838242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:32.838389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:32.838396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:56.150 12076.75 IOPS, 47.17 MiB/s [2024-10-08T15:46:48.142Z] 11147.77 IOPS, 43.55 MiB/s [2024-10-08T15:46:48.142Z] 10351.50 IOPS, 40.44 MiB/s [2024-10-08T15:46:48.142Z] 9723.00 IOPS, 37.98 MiB/s [2024-10-08T15:46:48.142Z] 9934.50 IOPS, 38.81 MiB/s [2024-10-08T15:46:48.142Z] 10103.24 IOPS, 39.47 MiB/s [2024-10-08T15:46:48.142Z] 10474.72 IOPS, 40.92 MiB/s [2024-10-08T15:46:48.142Z] 10808.79 IOPS, 42.22 MiB/s [2024-10-08T15:46:48.142Z] 11008.45 IOPS, 43.00 MiB/s [2024-10-08T15:46:48.142Z] 11098.29 IOPS, 43.35 MiB/s [2024-10-08T15:46:48.142Z] 11174.45 IOPS, 43.65 MiB/s [2024-10-08T15:46:48.142Z] 11396.83 IOPS, 44.52 MiB/s [2024-10-08T15:46:48.142Z] 11619.12 IOPS, 45.39 MiB/s [2024-10-08T15:46:48.142Z] [2024-10-08 17:46:45.465127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:45.465164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007f 
p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:45.465193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:45.465200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:45.465212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:45.465217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:45.465233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:45.465238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:45.465249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:45.465254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:45.465264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:45.465269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:45.465280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:45.465285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:45.465295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:45.465301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:45.465311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.150 [2024-10-08 17:46:45.465316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:45.465327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.150 [2024-10-08 17:46:45.465332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:45.465342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:45.465347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:45.465358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:45.465363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:45.465373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:45.465378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:45.465389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:45.465394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:45.465404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:45.465410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:45.465420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:45.465427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:45.465437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:45.465443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:45.465453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:45.465458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:45.465468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.150 [2024-10-08 17:46:45.465474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:56.150 [2024-10-08 17:46:45.465485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.465490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.465501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.465506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.465516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.465521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.465531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.151 [2024-10-08 17:46:45.465536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.465547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.151 [2024-10-08 17:46:45.465554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.466931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.151 [2024-10-08 17:46:45.466947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.466959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.466965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.466980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.466985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.466996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.467004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.467019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.467035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:56.151 [2024-10-08 17:46:45.467051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.467066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.467082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.467097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.467113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.467129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.467144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.467160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.467175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.467190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.151 [2024-10-08 17:46:45.467208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.151 [2024-10-08 17:46:45.467607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.467623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.467639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.467655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.467670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.467686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.467702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.467719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.151 [2024-10-08 17:46:45.467735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:56.151 [2024-10-08 17:46:45.467745] nvme_qpair.c: 
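Every completion in both bursts carries the same status, ASYMMETRIC ACCESS INACCESSIBLE (03/02): the target is reporting this listener path as unusable under ANA, and the host multipath layer is expected to retry the I/O on the surviving path, which is why throughput dips rather than the verify job failing. A minimal sketch of how a test typically induces this from the target side; the RPC name is stock SPDK rpc.py, but the listener address, port (NVMF_SECOND_PORT, 4421) and timing here are assumptions, not taken from this trace:

    # Sketch: force one listener inaccessible to trigger host failover, then restore it
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
    sleep 10    # verify I/O keeps running against the other path meanwhile
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n optimized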
00:30:56.152 11749.56 IOPS, 45.90 MiB/s [2024-10-08T15:46:48.144Z] 11785.88 IOPS, 46.04 MiB/s [2024-10-08T15:46:48.144Z] Received shutdown signal, test time was about 26.743924 seconds
00:30:56.152 Latency(us)
00:30:56.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:56.152 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:56.152 Verification LBA range: start 0x0 length 0x4000
00:30:56.152 Nvme0n1 : 26.74 11807.06 46.12 0.00 0.00 10820.62 535.89 3075822.93
00:30:56.152 ===================================================================================================================
00:30:56.152 Total : 11807.06 46.12 0.00 0.00 10820.62 535.89 3075822.93
00:30:56.152 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:56.152 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:30:56.152 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:56.152 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status --
host/multipath_status.sh@148 -- # nvmftestfini 00:30:56.152 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:56.152 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:30:56.152 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:56.152 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:30:56.152 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:56.152 17:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:56.152 rmmod nvme_tcp 00:30:56.152 rmmod nvme_fabrics 00:30:56.152 rmmod nvme_keyring 00:30:56.152 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:56.152 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:30:56.152 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:30:56.152 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 500595 ']' 00:30:56.152 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 500595 00:30:56.152 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 500595 ']' 00:30:56.152 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 500595 00:30:56.152 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:30:56.152 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:56.152 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 500595 00:30:56.418 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:56.418 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:56.418 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 500595' 00:30:56.418 killing process with pid 500595 00:30:56.418 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 500595 00:30:56.418 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 500595 00:30:56.418 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:56.418 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:56.419 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:56.419 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:30:56.419 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:30:56.419 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:56.419 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:30:56.419 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:56.419 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:56.419 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.419 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.419 17:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.410 17:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:58.410 00:30:58.410 real 0m41.337s 00:30:58.410 user 1m46.360s 00:30:58.410 sys 0m11.715s 00:30:58.410 17:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:58.410 17:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:58.410 ************************************ 00:30:58.410 END TEST nvmf_host_multipath_status 00:30:58.410 ************************************ 00:30:58.410 17:46:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:58.410 17:46:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:58.410 17:46:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:58.410 17:46:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.701 ************************************ 00:30:58.701 START TEST nvmf_discovery_remove_ifc 00:30:58.701 ************************************ 00:30:58.701 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:58.701 * Looking for test storage... 
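For reference, the nvmftestfini trace above boils down to a short teardown sequence. This restates the commands already visible in the log; only the namespace deletion (performed inside _remove_spdk_ns, whose body is not shown) is an assumption:

    sync
    modprobe -v -r nvme-tcp                  # rmmod lines above: nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 500595                              # killprocess of the nvmf_tgt reactor_0, as traced above
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only the SPDK_NVMF-tagged rules
    ip netns delete cvl_0_0_ns_spdk          # assumed content of _remove_spdk_ns
    ip -4 addr flush cvl_0_1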
00:30:58.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:58.701 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:58.701 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:30:58.701 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:58.701 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:58.701 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:58.701 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:58.701 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:58.701 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:30:58.701 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:58.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.702 --rc genhtml_branch_coverage=1 00:30:58.702 --rc genhtml_function_coverage=1 00:30:58.702 --rc genhtml_legend=1 00:30:58.702 --rc geninfo_all_blocks=1 00:30:58.702 --rc geninfo_unexecuted_blocks=1 00:30:58.702 00:30:58.702 ' 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:58.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.702 --rc genhtml_branch_coverage=1 00:30:58.702 --rc genhtml_function_coverage=1 00:30:58.702 --rc genhtml_legend=1 00:30:58.702 --rc geninfo_all_blocks=1 00:30:58.702 --rc geninfo_unexecuted_blocks=1 00:30:58.702 00:30:58.702 ' 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:58.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.702 --rc genhtml_branch_coverage=1 00:30:58.702 --rc genhtml_function_coverage=1 00:30:58.702 --rc genhtml_legend=1 00:30:58.702 --rc geninfo_all_blocks=1 00:30:58.702 --rc geninfo_unexecuted_blocks=1 00:30:58.702 00:30:58.702 ' 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:58.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.702 --rc genhtml_branch_coverage=1 00:30:58.702 --rc genhtml_function_coverage=1 00:30:58.702 --rc genhtml_legend=1 00:30:58.702 --rc geninfo_all_blocks=1 00:30:58.702 --rc geninfo_unexecuted_blocks=1 00:30:58.702 00:30:58.702 ' 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:58.702 
17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:58.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.702 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:58.703 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:58.703 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:30:58.703 17:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:07.157 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:07.157 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:31:07.157 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:07.157 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:07.157 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:07.157 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:07.157 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:07.157 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:31:07.157 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:07.157 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:31:07.157 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:31:07.157 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:31:07.157 17:46:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:31:07.157 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:31:07.157 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:31:07.157 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:07.157 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:07.157 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:07.157 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:07.157 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:07.158 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:07.158 17:46:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:07.158 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:07.158 Found net devices under 0000:31:00.0: cvl_0_0 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:07.158 Found net devices under 0000:31:00.1: cvl_0_1 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:07.158 17:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:07.158 
17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:07.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:07.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:31:07.158 00:31:07.158 --- 10.0.0.2 ping statistics --- 00:31:07.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.158 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:07.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:07.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:31:07.158 00:31:07.158 --- 10.0.0.1 ping statistics --- 00:31:07.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.158 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=510991 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 510991 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 510991 ']' 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:07.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:07.158 17:46:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:07.158 [2024-10-08 17:46:58.337407] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:31:07.159 [2024-10-08 17:46:58.337473] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:07.159 [2024-10-08 17:46:58.426602] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.159 [2024-10-08 17:46:58.520918] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:07.159 [2024-10-08 17:46:58.520968] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:07.159 [2024-10-08 17:46:58.520987] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:07.159 [2024-10-08 17:46:58.520994] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:07.159 [2024-10-08 17:46:58.521000] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:07.159 [2024-10-08 17:46:58.521823] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:07.490 17:46:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:07.490 17:46:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:31:07.490 17:46:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:07.490 17:46:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:07.490 17:46:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:07.490 17:46:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:07.490 17:46:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:07.490 17:46:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.490 17:46:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:07.490 [2024-10-08 17:46:59.207347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:07.490 [2024-10-08 17:46:59.215599] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:07.490 null0 00:31:07.490 [2024-10-08 17:46:59.247562] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:07.490 17:46:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.490 17:46:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=511340 00:31:07.490 17:46:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 511340 /tmp/host.sock 00:31:07.490 17:46:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:07.490 17:46:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 511340 ']' 00:31:07.490 17:46:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:31:07.490 17:46:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:07.490 17:46:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:07.490 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:07.490 17:46:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:07.490 17:46:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:07.490 [2024-10-08 17:46:59.323826] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:31:07.490 [2024-10-08 17:46:59.323885] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid511340 ] 00:31:07.490 [2024-10-08 17:46:59.405789] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.849 [2024-10-08 17:46:59.501758] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.455 17:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:08.455 17:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:31:08.455 17:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:08.455 17:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:08.455 17:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.455 17:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:08.455 17:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.455 17:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:08.455 17:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.455 17:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:08.455 17:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.455 17:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:08.455 17:47:00 
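The host-side nvmf_tgt was launched with -r /tmp/host.sock, so each rpc_cmd above is effectively the harness wrapper around scripts/rpc.py pointed at that socket. The same attach can be issued directly; every argument below is copied from the trace, and the deliberately short 1-2 s timeouts are what keep the interface-removal test fast:

# sketch: drive the host app over its private RPC socket (values verbatim from this run)
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
./scripts/rpc.py -s /tmp/host.sock framework_start_init
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach   # returns once the subsystem is attached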
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.455 17:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:09.398 [2024-10-08 17:47:01.326515] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:09.398 [2024-10-08 17:47:01.326549] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:09.398 [2024-10-08 17:47:01.326565] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:09.659 [2024-10-08 17:47:01.454965] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:09.659 [2024-10-08 17:47:01.517914] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:09.659 [2024-10-08 17:47:01.518007] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:09.659 [2024-10-08 17:47:01.518033] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:09.659 [2024-10-08 17:47:01.518051] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:09.659 [2024-10-08 17:47:01.518076] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:09.659 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.659 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:09.659 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:09.659 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:09.659 [2024-10-08 17:47:01.525319] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc2a2d0 was disconnected and freed. delete nvme_qpair. 
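get_bdev_list and wait_for_bdev are small helpers in host/discovery_remove_ifc.sh; their shape can be reconstructed from the xtrace (a sketch only; the real helpers may differ in detail, e.g. a retry cap):

# sketch of the polling helpers visible in the trace
get_bdev_list() {
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
wait_for_bdev() {
    # poll once per second until the bdev list matches the expectation
    while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
}
wait_for_bdev nvme0n1   # discovery attached controller nvme0, so its namespace is nvme0n1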
00:31:09.659 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:09.659 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.659 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:09.659 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:09.659 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:09.659 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.659 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:09.659 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:09.659 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:09.920 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:09.920 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:09.920 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:09.920 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:09.920 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.920 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:09.920 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:09.920 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:09.920 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.920 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:09.920 17:47:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:10.862 17:47:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:10.862 17:47:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.862 17:47:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:10.862 17:47:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.862 17:47:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:10.862 17:47:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:10.862 17:47:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:10.862 17:47:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.862 17:47:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
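The two netns commands just above (ip addr del and ip link set cvl_0_0 down) are the fault injection itself: the target's interface vanishes under a live connection. A condensed sketch of this step, reusing the helper sketched earlier; note the deletion of nvme0n1 only lands after the dead socket times out (the errno 110 messages further below) and the 2 s ctrlr-loss-timeout then expires, which is why several sleep-1 polls go by first:

# sketch: inject the fault, then wait for bdev_nvme to give up on nvme0n1
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
wait_for_bdev ''   # converges once the controller is declared lost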
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:10.862 17:47:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:12.243 17:47:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:12.243 17:47:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:12.243 17:47:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:12.243 17:47:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.243 17:47:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:12.243 17:47:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:12.243 17:47:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:12.243 17:47:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.243 17:47:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:12.243 17:47:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:13.183 17:47:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:13.183 17:47:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:13.183 17:47:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:13.183 17:47:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.183 17:47:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:13.183 17:47:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:13.183 17:47:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:13.183 17:47:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.183 17:47:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:13.183 17:47:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:14.126 17:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:14.126 17:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:14.126 17:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:14.126 17:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.126 17:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:14.126 17:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:14.126 17:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:14.126 17:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:31:14.126 17:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:14.126 17:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:15.067 [2024-10-08 17:47:06.958405] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:15.067 [2024-10-08 17:47:06.958440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.067 [2024-10-08 17:47:06.958449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.067 [2024-10-08 17:47:06.958455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.067 [2024-10-08 17:47:06.958461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.067 [2024-10-08 17:47:06.958467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.067 [2024-10-08 17:47:06.958472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.067 [2024-10-08 17:47:06.958478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.067 [2024-10-08 17:47:06.958483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.067 [2024-10-08 17:47:06.958488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.067 [2024-10-08 17:47:06.958493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.067 [2024-10-08 17:47:06.958498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc06d40 is same with the state(6) to be set 00:31:15.067 [2024-10-08 17:47:06.968427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc06d40 (9): Bad file descriptor 00:31:15.067 [2024-10-08 17:47:06.978466] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:15.067 17:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:15.067 17:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:15.067 17:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:15.067 17:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.067 17:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:15.067 17:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:15.067 17:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:16.011 [2024-10-08 17:47:07.991108] 
posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:16.011 [2024-10-08 17:47:07.991203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc06d40 with addr=10.0.0.2, port=4420 00:31:16.011 [2024-10-08 17:47:07.991235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc06d40 is same with the state(6) to be set 00:31:16.011 [2024-10-08 17:47:07.991293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc06d40 (9): Bad file descriptor 00:31:16.012 [2024-10-08 17:47:07.992409] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:16.012 [2024-10-08 17:47:07.992477] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:16.012 [2024-10-08 17:47:07.992499] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:16.012 [2024-10-08 17:47:07.992521] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:16.012 [2024-10-08 17:47:07.992587] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:16.012 [2024-10-08 17:47:07.992612] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:16.271 17:47:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.271 17:47:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:16.271 17:47:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:17.219 [2024-10-08 17:47:08.995008] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:17.219 [2024-10-08 17:47:08.995024] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:17.219 [2024-10-08 17:47:08.995030] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:17.219 [2024-10-08 17:47:08.995036] nvme_ctrlr.c:1114:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:17.219 [2024-10-08 17:47:08.995046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
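What this error burst shows: the flushed qpair fails with errno 110 (ETIMEDOUT), bdev_nvme attempts a failover/reset, and because the controller was attached with --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2 it retries roughly once per second before declaring the controller lost and deleting nvme0n1. While that window is open, the controller state can be inspected with a standard RPC:

# sketch: observe reset/failed state during the outage
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq .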
00:31:17.219 [2024-10-08 17:47:08.995060] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:17.219 [2024-10-08 17:47:08.995078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.219 [2024-10-08 17:47:08.995085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.219 [2024-10-08 17:47:08.995093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.219 [2024-10-08 17:47:08.995098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.219 [2024-10-08 17:47:08.995104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.219 [2024-10-08 17:47:08.995109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.219 [2024-10-08 17:47:08.995114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.219 [2024-10-08 17:47:08.995119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.219 [2024-10-08 17:47:08.995125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.219 [2024-10-08 17:47:08.995134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.219 [2024-10-08 17:47:08.995139] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:31:17.219 [2024-10-08 17:47:08.995576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf6480 (9): Bad file descriptor 00:31:17.219 [2024-10-08 17:47:08.996585] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:17.219 [2024-10-08 17:47:08.996594] nvme_ctrlr.c:1233:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:17.219 17:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:17.219 17:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:17.219 17:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:17.219 17:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.220 17:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:17.220 17:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:17.220 17:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:17.220 17:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.220 17:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:17.220 17:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:17.220 17:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:17.220 17:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:17.220 17:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:17.220 17:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:17.220 17:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:17.220 17:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.220 17:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:17.220 17:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:17.220 17:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:17.220 17:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.481 17:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:17.481 17:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:18.423 17:47:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:18.423 17:47:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:18.423 17:47:10 
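Re-adding 10.0.0.2/24 and bringing cvl_0_0 back up is the recovery half of the test: the still-running discovery service reconnects on its own and surfaces the namespace as nvme1n1 (a fresh controller name, since nvme0 was fully torn down). Its view of the subsystems can be checked with:

# sketch: inspect the discovery service after the link returns
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info | jq .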
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:18.423 17:47:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.423 17:47:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:18.423 17:47:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:18.423 17:47:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:18.423 17:47:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.423 17:47:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:18.423 17:47:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:19.363 [2024-10-08 17:47:11.050922] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:19.363 [2024-10-08 17:47:11.050936] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:19.364 [2024-10-08 17:47:11.050945] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:19.364 [2024-10-08 17:47:11.179330] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:19.364 17:47:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:19.364 17:47:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:19.364 17:47:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:19.364 17:47:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.364 17:47:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:19.364 17:47:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:19.364 17:47:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:19.364 17:47:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.364 17:47:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:19.364 17:47:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:19.624 [2024-10-08 17:47:11.402966] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:19.624 [2024-10-08 17:47:11.403004] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:19.624 [2024-10-08 17:47:11.403019] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:19.624 [2024-10-08 17:47:11.403029] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:19.624 [2024-10-08 17:47:11.403035] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:19.624 [2024-10-08 17:47:11.409385] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc11160 was disconnected and freed. 
delete nvme_qpair. 00:31:20.564 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:20.564 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:20.564 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:20.564 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.564 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:20.565 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:20.565 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:20.565 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.565 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:20.565 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:20.565 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 511340 00:31:20.565 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 511340 ']' 00:31:20.565 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 511340 00:31:20.565 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:31:20.565 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:20.565 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 511340 00:31:20.565 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:20.565 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:20.565 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 511340' 00:31:20.565 killing process with pid 511340 00:31:20.565 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 511340 00:31:20.565 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 511340 00:31:20.825 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:20.825 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:20.825 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:31:20.825 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:20.825 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:31:20.825 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:20.825 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:20.825 rmmod nvme_tcp 00:31:20.825 rmmod nvme_fabrics 00:31:20.825 rmmod nvme_keyring 
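killprocess is the usual autotest_common.sh teardown helper; its behavior as seen in the trace (PID check, a guard against signalling sudo, then kill and wait) can be sketched as follows (reconstructed; the real helper handles more platforms and edge cases):

# sketch of killprocess as exercised above
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0             # nothing to do
    if [ "$(uname)" = Linux ]; then
        # never signal an elevated wrapper by mistake
        [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid" || true                 # wait works because $pid is a child here
}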
00:31:20.825 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:20.825 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:31:20.825 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:31:20.825 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 510991 ']' 00:31:20.825 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 510991 00:31:20.825 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 510991 ']' 00:31:20.825 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 510991 00:31:20.825 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:31:20.825 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:20.825 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 510991 00:31:20.825 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:20.825 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:20.825 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 510991' 00:31:20.825 killing process with pid 510991 00:31:20.825 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 510991 00:31:20.825 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 510991 00:31:21.086 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:21.086 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:21.086 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:21.086 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:31:21.086 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:31:21.086 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:21.086 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:31:21.086 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:21.086 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:21.086 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.086 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:21.086 17:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.998 17:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:22.998 00:31:22.998 real 0m24.512s 00:31:22.998 user 0m29.364s 00:31:22.998 sys 0m7.312s 00:31:22.998 17:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
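The iptr step above is the counterpart of the tagged ACCEPT rule installed during setup: rather than tracking rule positions, cleanup round-trips the whole ruleset and drops anything carrying the SPDK_NVMF comment:

# sketch: remove only the harness's firewall rules, leave everything else intact
iptables-save | grep -v SPDK_NVMF | iptables-restore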
common/autotest_common.sh@1126 -- # xtrace_disable 00:31:22.998 17:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:22.998 ************************************ 00:31:22.999 END TEST nvmf_discovery_remove_ifc 00:31:22.999 ************************************ 00:31:22.999 17:47:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:22.999 17:47:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:22.999 17:47:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:22.999 17:47:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.260 ************************************ 00:31:23.260 START TEST nvmf_identify_kernel_target 00:31:23.260 ************************************ 00:31:23.260 17:47:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:23.260 * Looking for test storage... 00:31:23.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:23.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.260 --rc genhtml_branch_coverage=1 00:31:23.260 --rc genhtml_function_coverage=1 00:31:23.260 --rc genhtml_legend=1 00:31:23.260 --rc geninfo_all_blocks=1 00:31:23.260 --rc geninfo_unexecuted_blocks=1 00:31:23.260 00:31:23.260 ' 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:23.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.260 --rc genhtml_branch_coverage=1 00:31:23.260 --rc genhtml_function_coverage=1 00:31:23.260 --rc genhtml_legend=1 00:31:23.260 --rc geninfo_all_blocks=1 00:31:23.260 --rc geninfo_unexecuted_blocks=1 00:31:23.260 00:31:23.260 ' 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:23.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.260 --rc genhtml_branch_coverage=1 00:31:23.260 --rc genhtml_function_coverage=1 00:31:23.260 --rc genhtml_legend=1 00:31:23.260 --rc geninfo_all_blocks=1 00:31:23.260 --rc geninfo_unexecuted_blocks=1 00:31:23.260 00:31:23.260 ' 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:23.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.260 --rc genhtml_branch_coverage=1 00:31:23.260 --rc genhtml_function_coverage=1 00:31:23.260 --rc genhtml_legend=1 00:31:23.260 --rc geninfo_all_blocks=1 00:31:23.260 --rc geninfo_unexecuted_blocks=1 00:31:23.260 00:31:23.260 ' 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:23.260 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:31:23.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:23.261 17:47:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:31.402 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:31.402 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:31.402 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:31.402 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:31.402 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:31.402 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:31.402 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:31.402 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:31.402 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:31.402 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:31:31.402 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:31.402 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:31:31.402 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:31.402 17:47:22 
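The "[: : integer expression expected" message above is a real, if harmless, script bug caught by the trace: nvmf/common.sh line 33 runs an integer test against a variable that is empty in this configuration, so '[' rejects the empty string and the test simply evaluates false. A defensive pattern for such checks (VAR is a hypothetical stand-in; the trace does not show which variable was empty):

# sketch: guard integer comparisons against unset/empty values
[ "${VAR:-0}" -eq 1 ] && echo enabled                 # default empty to 0
[ -n "$VAR" ] && [ "$VAR" -eq 1 ] && echo enabled     # or require non-empty first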
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:31:31.402 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:31.402 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:31.402 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:31.402 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:31.402 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:31.402 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:31.402 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:31.403 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:31.403 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:31.403 Found net devices under 0000:31:00.0: cvl_0_0 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:31.403 Found net devices under 0000:31:00.1: cvl_0_1 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:31.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:31.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:31:31.403 00:31:31.403 --- 10.0.0.2 ping statistics --- 00:31:31.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.403 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:31.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:31.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:31:31.403 00:31:31.403 --- 10.0.0.1 ping statistics --- 00:31:31.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.403 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:31.403 17:47:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:31.403 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:31.404 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:31.404 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:31:31.404 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:31.404 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:31.404 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:31.404 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:31:31.404 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:31:31.404 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:31:31.404 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:31.404 17:47:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:34.704 Waiting for block devices as requested 00:31:34.704 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:34.704 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:34.704 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:34.965 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:34.965 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:34.965 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:35.226 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:35.226 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:35.226 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:35.485 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:35.485 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:35.746 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:35.746 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:35.746 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:36.007 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:36.007 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:36.007 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:36.267 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:31:36.267 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:36.267 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:31:36.267 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:31:36.267 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:36.267 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
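The configfs writes traced below turn /dev/nvme0n1 into a kernel NVMe-oF target and publish it on 10.0.0.1:4420. The xtrace prints only the echo values, not their redirect targets, so the condensed sketch here fills in the standard kernel nvmet configfs attribute names as an assumption; the NQN, device, and address values are taken from this run (run as root):

# Sketch of the configure_kernel_target steps, assuming standard nvmet configfs attributes.
modprobe nvmet       # as in the log; the tcp transport module is also needed
modprobe nvmet_tcp   # loading it explicitly avoids relying on the kernel's request_module
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # reappears below as Model Number
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # the target starts listening once this link exists
# Initiator-side check, matching the discovery step in the log:
nvme discover -t tcp -a 10.0.0.1 -s 4420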
00:31:36.267 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:31:36.267 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:36.267 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:36.536 No valid GPT data, bailing 00:31:36.536 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:36.536 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:31:36.536 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:31:36.536 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:31:36.536 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:31:36.536 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:36.536 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:36.536 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:36.536 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:36.537 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:31:36.537 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:31:36.537 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:31:36.537 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:31:36.537 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:31:36.537 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:31:36.537 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:31:36.537 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:36.537 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:31:36.537 00:31:36.537 Discovery Log Number of Records 2, Generation counter 2 00:31:36.537 =====Discovery Log Entry 0====== 00:31:36.537 trtype: tcp 00:31:36.537 adrfam: ipv4 00:31:36.537 subtype: current discovery subsystem 00:31:36.537 treq: not specified, sq flow control disable supported 00:31:36.537 portid: 1 00:31:36.537 trsvcid: 4420 00:31:36.537 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:36.537 traddr: 10.0.0.1 00:31:36.537 eflags: none 00:31:36.537 sectype: none 00:31:36.537 =====Discovery Log Entry 1====== 00:31:36.537 trtype: tcp 00:31:36.537 adrfam: ipv4 00:31:36.537 subtype: nvme subsystem 00:31:36.537 treq: not specified, sq flow control disable 
supported 00:31:36.537 portid: 1 00:31:36.537 trsvcid: 4420 00:31:36.537 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:36.537 traddr: 10.0.0.1 00:31:36.537 eflags: none 00:31:36.537 sectype: none 00:31:36.537 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:36.538 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:36.538 ===================================================== 00:31:36.538 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:36.538 ===================================================== 00:31:36.538 Controller Capabilities/Features 00:31:36.538 ================================ 00:31:36.538 Vendor ID: 0000 00:31:36.538 Subsystem Vendor ID: 0000 00:31:36.538 Serial Number: fc31332ae372635ef24d 00:31:36.538 Model Number: Linux 00:31:36.538 Firmware Version: 6.8.9-20 00:31:36.538 Recommended Arb Burst: 0 00:31:36.538 IEEE OUI Identifier: 00 00 00 00:31:36.538 Multi-path I/O 00:31:36.538 May have multiple subsystem ports: No 00:31:36.538 May have multiple controllers: No 00:31:36.538 Associated with SR-IOV VF: No 00:31:36.538 Max Data Transfer Size: Unlimited 00:31:36.538 Max Number of Namespaces: 0 00:31:36.538 Max Number of I/O Queues: 1024 00:31:36.538 NVMe Specification Version (VS): 1.3 00:31:36.538 NVMe Specification Version (Identify): 1.3 00:31:36.538 Maximum Queue Entries: 1024 00:31:36.538 Contiguous Queues Required: No 00:31:36.538 Arbitration Mechanisms Supported 00:31:36.538 Weighted Round Robin: Not Supported 00:31:36.538 Vendor Specific: Not Supported 00:31:36.538 Reset Timeout: 7500 ms 00:31:36.539 Doorbell Stride: 4 bytes 00:31:36.539 NVM Subsystem Reset: Not Supported 00:31:36.539 Command Sets Supported 00:31:36.539 NVM Command Set: Supported 00:31:36.539 Boot Partition: Not Supported 00:31:36.539 Memory Page Size Minimum: 4096 bytes 00:31:36.539 Memory Page Size Maximum: 4096 bytes 00:31:36.539 Persistent Memory Region: Not Supported 00:31:36.539 Optional Asynchronous Events Supported 00:31:36.539 Namespace Attribute Notices: Not Supported 00:31:36.539 Firmware Activation Notices: Not Supported 00:31:36.539 ANA Change Notices: Not Supported 00:31:36.539 PLE Aggregate Log Change Notices: Not Supported 00:31:36.539 LBA Status Info Alert Notices: Not Supported 00:31:36.539 EGE Aggregate Log Change Notices: Not Supported 00:31:36.539 Normal NVM Subsystem Shutdown event: Not Supported 00:31:36.539 Zone Descriptor Change Notices: Not Supported 00:31:36.539 Discovery Log Change Notices: Supported 00:31:36.539 Controller Attributes 00:31:36.539 128-bit Host Identifier: Not Supported 00:31:36.539 Non-Operational Permissive Mode: Not Supported 00:31:36.539 NVM Sets: Not Supported 00:31:36.539 Read Recovery Levels: Not Supported 00:31:36.539 Endurance Groups: Not Supported 00:31:36.539 Predictable Latency Mode: Not Supported 00:31:36.539 Traffic Based Keep ALive: Not Supported 00:31:36.539 Namespace Granularity: Not Supported 00:31:36.539 SQ Associations: Not Supported 00:31:36.539 UUID List: Not Supported 00:31:36.539 Multi-Domain Subsystem: Not Supported 00:31:36.540 Fixed Capacity Management: Not Supported 00:31:36.540 Variable Capacity Management: Not Supported 00:31:36.540 Delete Endurance Group: Not Supported 00:31:36.540 Delete NVM Set: Not Supported 00:31:36.541 Extended LBA Formats Supported: Not Supported 00:31:36.541 Flexible Data Placement 
Supported: Not Supported 00:31:36.541 00:31:36.541 Controller Memory Buffer Support 00:31:36.541 ================================ 00:31:36.541 Supported: No 00:31:36.541 00:31:36.541 Persistent Memory Region Support 00:31:36.541 ================================ 00:31:36.541 Supported: No 00:31:36.541 00:31:36.541 Admin Command Set Attributes 00:31:36.541 ============================ 00:31:36.541 Security Send/Receive: Not Supported 00:31:36.541 Format NVM: Not Supported 00:31:36.541 Firmware Activate/Download: Not Supported 00:31:36.541 Namespace Management: Not Supported 00:31:36.541 Device Self-Test: Not Supported 00:31:36.541 Directives: Not Supported 00:31:36.541 NVMe-MI: Not Supported 00:31:36.541 Virtualization Management: Not Supported 00:31:36.541 Doorbell Buffer Config: Not Supported 00:31:36.541 Get LBA Status Capability: Not Supported 00:31:36.541 Command & Feature Lockdown Capability: Not Supported 00:31:36.541 Abort Command Limit: 1 00:31:36.541 Async Event Request Limit: 1 00:31:36.541 Number of Firmware Slots: N/A 00:31:36.541 Firmware Slot 1 Read-Only: N/A 00:31:36.541 Firmware Activation Without Reset: N/A 00:31:36.541 Multiple Update Detection Support: N/A 00:31:36.541 Firmware Update Granularity: No Information Provided 00:31:36.541 Per-Namespace SMART Log: No 00:31:36.541 Asymmetric Namespace Access Log Page: Not Supported 00:31:36.541 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:36.541 Command Effects Log Page: Not Supported 00:31:36.541 Get Log Page Extended Data: Supported 00:31:36.541 Telemetry Log Pages: Not Supported 00:31:36.541 Persistent Event Log Pages: Not Supported 00:31:36.541 Supported Log Pages Log Page: May Support 00:31:36.541 Commands Supported & Effects Log Page: Not Supported 00:31:36.541 Feature Identifiers & Effects Log Page:May Support 00:31:36.541 NVMe-MI Commands & Effects Log Page: May Support 00:31:36.542 Data Area 4 for Telemetry Log: Not Supported 00:31:36.542 Error Log Page Entries Supported: 1 00:31:36.542 Keep Alive: Not Supported 00:31:36.542 00:31:36.542 NVM Command Set Attributes 00:31:36.542 ========================== 00:31:36.542 Submission Queue Entry Size 00:31:36.542 Max: 1 00:31:36.542 Min: 1 00:31:36.542 Completion Queue Entry Size 00:31:36.542 Max: 1 00:31:36.542 Min: 1 00:31:36.542 Number of Namespaces: 0 00:31:36.542 Compare Command: Not Supported 00:31:36.542 Write Uncorrectable Command: Not Supported 00:31:36.542 Dataset Management Command: Not Supported 00:31:36.542 Write Zeroes Command: Not Supported 00:31:36.542 Set Features Save Field: Not Supported 00:31:36.542 Reservations: Not Supported 00:31:36.542 Timestamp: Not Supported 00:31:36.542 Copy: Not Supported 00:31:36.542 Volatile Write Cache: Not Present 00:31:36.542 Atomic Write Unit (Normal): 1 00:31:36.542 Atomic Write Unit (PFail): 1 00:31:36.542 Atomic Compare & Write Unit: 1 00:31:36.542 Fused Compare & Write: Not Supported 00:31:36.542 Scatter-Gather List 00:31:36.542 SGL Command Set: Supported 00:31:36.543 SGL Keyed: Not Supported 00:31:36.543 SGL Bit Bucket Descriptor: Not Supported 00:31:36.543 SGL Metadata Pointer: Not Supported 00:31:36.543 Oversized SGL: Not Supported 00:31:36.543 SGL Metadata Address: Not Supported 00:31:36.543 SGL Offset: Supported 00:31:36.543 Transport SGL Data Block: Not Supported 00:31:36.543 Replay Protected Memory Block: Not Supported 00:31:36.543 00:31:36.543 Firmware Slot Information 00:31:36.543 ========================= 00:31:36.543 Active slot: 0 00:31:36.543 00:31:36.543 00:31:36.543 Error Log 00:31:36.543 
========= 00:31:36.543 00:31:36.543 Active Namespaces 00:31:36.543 ================= 00:31:36.543 Discovery Log Page 00:31:36.543 ================== 00:31:36.543 Generation Counter: 2 00:31:36.543 Number of Records: 2 00:31:36.543 Record Format: 0 00:31:36.543 00:31:36.543 Discovery Log Entry 0 00:31:36.543 ---------------------- 00:31:36.543 Transport Type: 3 (TCP) 00:31:36.543 Address Family: 1 (IPv4) 00:31:36.543 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:36.543 Entry Flags: 00:31:36.543 Duplicate Returned Information: 0 00:31:36.543 Explicit Persistent Connection Support for Discovery: 0 00:31:36.543 Transport Requirements: 00:31:36.543 Secure Channel: Not Specified 00:31:36.543 Port ID: 1 (0x0001) 00:31:36.543 Controller ID: 65535 (0xffff) 00:31:36.543 Admin Max SQ Size: 32 00:31:36.543 Transport Service Identifier: 4420 00:31:36.543 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:36.543 Transport Address: 10.0.0.1 00:31:36.543 Discovery Log Entry 1 00:31:36.544 ---------------------- 00:31:36.544 Transport Type: 3 (TCP) 00:31:36.544 Address Family: 1 (IPv4) 00:31:36.544 Subsystem Type: 2 (NVM Subsystem) 00:31:36.544 Entry Flags: 00:31:36.544 Duplicate Returned Information: 0 00:31:36.544 Explicit Persistent Connection Support for Discovery: 0 00:31:36.544 Transport Requirements: 00:31:36.544 Secure Channel: Not Specified 00:31:36.544 Port ID: 1 (0x0001) 00:31:36.544 Controller ID: 65535 (0xffff) 00:31:36.544 Admin Max SQ Size: 32 00:31:36.544 Transport Service Identifier: 4420 00:31:36.544 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:36.544 Transport Address: 10.0.0.1 00:31:36.544 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:36.811 get_feature(0x01) failed 00:31:36.811 get_feature(0x02) failed 00:31:36.811 get_feature(0x04) failed 00:31:36.811 ===================================================== 00:31:36.811 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:36.811 ===================================================== 00:31:36.811 Controller Capabilities/Features 00:31:36.811 ================================ 00:31:36.811 Vendor ID: 0000 00:31:36.811 Subsystem Vendor ID: 0000 00:31:36.811 Serial Number: 084f74f3e0cd2acf134b 00:31:36.811 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:36.811 Firmware Version: 6.8.9-20 00:31:36.811 Recommended Arb Burst: 6 00:31:36.811 IEEE OUI Identifier: 00 00 00 00:31:36.811 Multi-path I/O 00:31:36.811 May have multiple subsystem ports: Yes 00:31:36.811 May have multiple controllers: Yes 00:31:36.811 Associated with SR-IOV VF: No 00:31:36.811 Max Data Transfer Size: Unlimited 00:31:36.811 Max Number of Namespaces: 1024 00:31:36.811 Max Number of I/O Queues: 128 00:31:36.811 NVMe Specification Version (VS): 1.3 00:31:36.811 NVMe Specification Version (Identify): 1.3 00:31:36.811 Maximum Queue Entries: 1024 00:31:36.811 Contiguous Queues Required: No 00:31:36.811 Arbitration Mechanisms Supported 00:31:36.811 Weighted Round Robin: Not Supported 00:31:36.811 Vendor Specific: Not Supported 00:31:36.811 Reset Timeout: 7500 ms 00:31:36.811 Doorbell Stride: 4 bytes 00:31:36.811 NVM Subsystem Reset: Not Supported 00:31:36.811 Command Sets Supported 00:31:36.811 NVM Command Set: Supported 00:31:36.812 Boot Partition: Not Supported 00:31:36.812 
Memory Page Size Minimum: 4096 bytes 00:31:36.812 Memory Page Size Maximum: 4096 bytes 00:31:36.812 Persistent Memory Region: Not Supported 00:31:36.812 Optional Asynchronous Events Supported 00:31:36.812 Namespace Attribute Notices: Supported 00:31:36.812 Firmware Activation Notices: Not Supported 00:31:36.812 ANA Change Notices: Supported 00:31:36.812 PLE Aggregate Log Change Notices: Not Supported 00:31:36.812 LBA Status Info Alert Notices: Not Supported 00:31:36.812 EGE Aggregate Log Change Notices: Not Supported 00:31:36.812 Normal NVM Subsystem Shutdown event: Not Supported 00:31:36.812 Zone Descriptor Change Notices: Not Supported 00:31:36.812 Discovery Log Change Notices: Not Supported 00:31:36.812 Controller Attributes 00:31:36.812 128-bit Host Identifier: Supported 00:31:36.812 Non-Operational Permissive Mode: Not Supported 00:31:36.812 NVM Sets: Not Supported 00:31:36.812 Read Recovery Levels: Not Supported 00:31:36.812 Endurance Groups: Not Supported 00:31:36.812 Predictable Latency Mode: Not Supported 00:31:36.812 Traffic Based Keep ALive: Supported 00:31:36.812 Namespace Granularity: Not Supported 00:31:36.812 SQ Associations: Not Supported 00:31:36.812 UUID List: Not Supported 00:31:36.812 Multi-Domain Subsystem: Not Supported 00:31:36.812 Fixed Capacity Management: Not Supported 00:31:36.812 Variable Capacity Management: Not Supported 00:31:36.812 Delete Endurance Group: Not Supported 00:31:36.812 Delete NVM Set: Not Supported 00:31:36.812 Extended LBA Formats Supported: Not Supported 00:31:36.812 Flexible Data Placement Supported: Not Supported 00:31:36.812 00:31:36.812 Controller Memory Buffer Support 00:31:36.812 ================================ 00:31:36.812 Supported: No 00:31:36.812 00:31:36.812 Persistent Memory Region Support 00:31:36.812 ================================ 00:31:36.812 Supported: No 00:31:36.812 00:31:36.812 Admin Command Set Attributes 00:31:36.812 ============================ 00:31:36.812 Security Send/Receive: Not Supported 00:31:36.812 Format NVM: Not Supported 00:31:36.812 Firmware Activate/Download: Not Supported 00:31:36.812 Namespace Management: Not Supported 00:31:36.812 Device Self-Test: Not Supported 00:31:36.812 Directives: Not Supported 00:31:36.812 NVMe-MI: Not Supported 00:31:36.812 Virtualization Management: Not Supported 00:31:36.812 Doorbell Buffer Config: Not Supported 00:31:36.812 Get LBA Status Capability: Not Supported 00:31:36.812 Command & Feature Lockdown Capability: Not Supported 00:31:36.812 Abort Command Limit: 4 00:31:36.812 Async Event Request Limit: 4 00:31:36.812 Number of Firmware Slots: N/A 00:31:36.812 Firmware Slot 1 Read-Only: N/A 00:31:36.812 Firmware Activation Without Reset: N/A 00:31:36.812 Multiple Update Detection Support: N/A 00:31:36.812 Firmware Update Granularity: No Information Provided 00:31:36.812 Per-Namespace SMART Log: Yes 00:31:36.812 Asymmetric Namespace Access Log Page: Supported 00:31:36.812 ANA Transition Time : 10 sec 00:31:36.812 00:31:36.812 Asymmetric Namespace Access Capabilities 00:31:36.812 ANA Optimized State : Supported 00:31:36.812 ANA Non-Optimized State : Supported 00:31:36.812 ANA Inaccessible State : Supported 00:31:36.812 ANA Persistent Loss State : Supported 00:31:36.812 ANA Change State : Supported 00:31:36.812 ANAGRPID is not changed : No 00:31:36.812 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:36.812 00:31:36.812 ANA Group Identifier Maximum : 128 00:31:36.812 Number of ANA Group Identifiers : 128 00:31:36.812 Max Number of Allowed Namespaces : 1024 00:31:36.812 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:36.812 Command Effects Log Page: Supported 00:31:36.812 Get Log Page Extended Data: Supported 00:31:36.812 Telemetry Log Pages: Not Supported 00:31:36.812 Persistent Event Log Pages: Not Supported 00:31:36.812 Supported Log Pages Log Page: May Support 00:31:36.812 Commands Supported & Effects Log Page: Not Supported 00:31:36.812 Feature Identifiers & Effects Log Page:May Support 00:31:36.812 NVMe-MI Commands & Effects Log Page: May Support 00:31:36.812 Data Area 4 for Telemetry Log: Not Supported 00:31:36.812 Error Log Page Entries Supported: 128 00:31:36.812 Keep Alive: Supported 00:31:36.812 Keep Alive Granularity: 1000 ms 00:31:36.812 00:31:36.812 NVM Command Set Attributes 00:31:36.812 ========================== 00:31:36.812 Submission Queue Entry Size 00:31:36.812 Max: 64 00:31:36.812 Min: 64 00:31:36.812 Completion Queue Entry Size 00:31:36.812 Max: 16 00:31:36.812 Min: 16 00:31:36.812 Number of Namespaces: 1024 00:31:36.812 Compare Command: Not Supported 00:31:36.812 Write Uncorrectable Command: Not Supported 00:31:36.812 Dataset Management Command: Supported 00:31:36.812 Write Zeroes Command: Supported 00:31:36.812 Set Features Save Field: Not Supported 00:31:36.812 Reservations: Not Supported 00:31:36.812 Timestamp: Not Supported 00:31:36.812 Copy: Not Supported 00:31:36.812 Volatile Write Cache: Present 00:31:36.812 Atomic Write Unit (Normal): 1 00:31:36.812 Atomic Write Unit (PFail): 1 00:31:36.812 Atomic Compare & Write Unit: 1 00:31:36.812 Fused Compare & Write: Not Supported 00:31:36.812 Scatter-Gather List 00:31:36.812 SGL Command Set: Supported 00:31:36.812 SGL Keyed: Not Supported 00:31:36.812 SGL Bit Bucket Descriptor: Not Supported 00:31:36.812 SGL Metadata Pointer: Not Supported 00:31:36.812 Oversized SGL: Not Supported 00:31:36.812 SGL Metadata Address: Not Supported 00:31:36.812 SGL Offset: Supported 00:31:36.812 Transport SGL Data Block: Not Supported 00:31:36.812 Replay Protected Memory Block: Not Supported 00:31:36.812 00:31:36.812 Firmware Slot Information 00:31:36.812 ========================= 00:31:36.812 Active slot: 0 00:31:36.812 00:31:36.812 Asymmetric Namespace Access 00:31:36.812 =========================== 00:31:36.812 Change Count : 0 00:31:36.812 Number of ANA Group Descriptors : 1 00:31:36.812 ANA Group Descriptor : 0 00:31:36.812 ANA Group ID : 1 00:31:36.812 Number of NSID Values : 1 00:31:36.812 Change Count : 0 00:31:36.812 ANA State : 1 00:31:36.812 Namespace Identifier : 1 00:31:36.812 00:31:36.812 Commands Supported and Effects 00:31:36.812 ============================== 00:31:36.812 Admin Commands 00:31:36.812 -------------- 00:31:36.812 Get Log Page (02h): Supported 00:31:36.812 Identify (06h): Supported 00:31:36.812 Abort (08h): Supported 00:31:36.812 Set Features (09h): Supported 00:31:36.812 Get Features (0Ah): Supported 00:31:36.812 Asynchronous Event Request (0Ch): Supported 00:31:36.812 Keep Alive (18h): Supported 00:31:36.812 I/O Commands 00:31:36.812 ------------ 00:31:36.812 Flush (00h): Supported 00:31:36.812 Write (01h): Supported LBA-Change 00:31:36.812 Read (02h): Supported 00:31:36.812 Write Zeroes (08h): Supported LBA-Change 00:31:36.812 Dataset Management (09h): Supported 00:31:36.812 00:31:36.812 Error Log 00:31:36.812 ========= 00:31:36.812 Entry: 0 00:31:36.812 Error Count: 0x3 00:31:36.812 Submission Queue Id: 0x0 00:31:36.812 Command Id: 0x5 00:31:36.812 Phase Bit: 0 00:31:36.812 Status Code: 0x2 00:31:36.812 Status Code Type: 0x0 00:31:36.812 Do Not Retry: 1 00:31:36.812 
Error Location: 0x28 00:31:36.812 LBA: 0x0 00:31:36.812 Namespace: 0x0 00:31:36.812 Vendor Log Page: 0x0 00:31:36.812 ----------- 00:31:36.812 Entry: 1 00:31:36.812 Error Count: 0x2 00:31:36.812 Submission Queue Id: 0x0 00:31:36.812 Command Id: 0x5 00:31:36.812 Phase Bit: 0 00:31:36.812 Status Code: 0x2 00:31:36.812 Status Code Type: 0x0 00:31:36.812 Do Not Retry: 1 00:31:36.812 Error Location: 0x28 00:31:36.812 LBA: 0x0 00:31:36.812 Namespace: 0x0 00:31:36.812 Vendor Log Page: 0x0 00:31:36.812 ----------- 00:31:36.812 Entry: 2 00:31:36.812 Error Count: 0x1 00:31:36.812 Submission Queue Id: 0x0 00:31:36.812 Command Id: 0x4 00:31:36.812 Phase Bit: 0 00:31:36.812 Status Code: 0x2 00:31:36.812 Status Code Type: 0x0 00:31:36.812 Do Not Retry: 1 00:31:36.812 Error Location: 0x28 00:31:36.812 LBA: 0x0 00:31:36.812 Namespace: 0x0 00:31:36.812 Vendor Log Page: 0x0 00:31:36.812 00:31:36.812 Number of Queues 00:31:36.812 ================ 00:31:36.812 Number of I/O Submission Queues: 128 00:31:36.812 Number of I/O Completion Queues: 128 00:31:36.812 00:31:36.812 ZNS Specific Controller Data 00:31:36.812 ============================ 00:31:36.812 Zone Append Size Limit: 0 00:31:36.812 00:31:36.812 00:31:36.812 Active Namespaces 00:31:36.812 ================= 00:31:36.812 get_feature(0x05) failed 00:31:36.812 Namespace ID:1 00:31:36.812 Command Set Identifier: NVM (00h) 00:31:36.812 Deallocate: Supported 00:31:36.812 Deallocated/Unwritten Error: Not Supported 00:31:36.812 Deallocated Read Value: Unknown 00:31:36.812 Deallocate in Write Zeroes: Not Supported 00:31:36.812 Deallocated Guard Field: 0xFFFF 00:31:36.812 Flush: Supported 00:31:36.812 Reservation: Not Supported 00:31:36.812 Namespace Sharing Capabilities: Multiple Controllers 00:31:36.813 Size (in LBAs): 3750748848 (1788GiB) 00:31:36.813 Capacity (in LBAs): 3750748848 (1788GiB) 00:31:36.813 Utilization (in LBAs): 3750748848 (1788GiB) 00:31:36.813 UUID: f061f13b-55d6-4062-89e0-0ba75e42eb94 00:31:36.813 Thin Provisioning: Not Supported 00:31:36.813 Per-NS Atomic Units: Yes 00:31:36.813 Atomic Write Unit (Normal): 8 00:31:36.813 Atomic Write Unit (PFail): 8 00:31:36.813 Preferred Write Granularity: 8 00:31:36.813 Atomic Compare & Write Unit: 8 00:31:36.813 Atomic Boundary Size (Normal): 0 00:31:36.813 Atomic Boundary Size (PFail): 0 00:31:36.813 Atomic Boundary Offset: 0 00:31:36.813 NGUID/EUI64 Never Reused: No 00:31:36.813 ANA group ID: 1 00:31:36.813 Namespace Write Protected: No 00:31:36.813 Number of LBA Formats: 1 00:31:36.813 Current LBA Format: LBA Format #00 00:31:36.813 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:36.813 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:36.813 rmmod nvme_tcp 00:31:36.813 rmmod nvme_fabrics 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:36.813 17:47:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.355 17:47:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:39.355 17:47:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:39.355 17:47:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:39.355 17:47:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:31:39.355 17:47:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:39.355 17:47:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:39.355 17:47:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:39.355 17:47:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:39.355 17:47:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:31:39.355 17:47:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:31:39.355 17:47:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:42.656 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:42.656 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:42.656 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:31:42.656 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:42.656 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:42.656 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:42.656 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:42.656 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:42.656 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:42.656 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:42.656 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:42.656 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:42.656 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:42.656 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:42.656 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:42.656 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:42.656 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:42.917 00:31:42.917 real 0m19.889s 00:31:42.917 user 0m5.410s 00:31:42.917 sys 0m11.421s 00:31:42.917 17:47:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:42.917 17:47:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:42.917 ************************************ 00:31:42.917 END TEST nvmf_identify_kernel_target 00:31:42.917 ************************************ 00:31:43.179 17:47:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:43.179 17:47:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:43.179 17:47:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:43.179 17:47:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.179 ************************************ 00:31:43.179 START TEST nvmf_auth_host 00:31:43.179 ************************************ 00:31:43.179 17:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:43.179 * Looking for test storage... 
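For reference, the clean_kernel_target teardown traced just above (the echo 0, rm -f, rmdir, and modprobe -r commands) undoes the configfs tree in the reverse order of its creation. A condensed sketch, reusing the path variables from the setup sketch earlier:

echo 0 > "$subsys/namespaces/1/enable"                          # quiesce the namespace first
rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"   # unpublish from the port
rmdir "$subsys/namespaces/1"
rmdir "$nvmet/ports/1"
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet                                     # unload once the configfs tree is empty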
00:31:43.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:43.179 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:31:43.441 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:31:43.441 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:43.441 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:31:43.441 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:43.441 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:43.441 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:43.441 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:31:43.441 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:43.441 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:43.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.441 --rc genhtml_branch_coverage=1 00:31:43.441 --rc genhtml_function_coverage=1 00:31:43.441 --rc genhtml_legend=1 00:31:43.441 --rc geninfo_all_blocks=1 00:31:43.441 --rc geninfo_unexecuted_blocks=1 00:31:43.441 00:31:43.441 ' 00:31:43.441 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:43.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.442 --rc genhtml_branch_coverage=1 00:31:43.442 --rc genhtml_function_coverage=1 00:31:43.442 --rc genhtml_legend=1 00:31:43.442 --rc geninfo_all_blocks=1 00:31:43.442 --rc geninfo_unexecuted_blocks=1 00:31:43.442 00:31:43.442 ' 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:43.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.442 --rc genhtml_branch_coverage=1 00:31:43.442 --rc genhtml_function_coverage=1 00:31:43.442 --rc genhtml_legend=1 00:31:43.442 --rc geninfo_all_blocks=1 00:31:43.442 --rc geninfo_unexecuted_blocks=1 00:31:43.442 00:31:43.442 ' 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:43.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.442 --rc genhtml_branch_coverage=1 00:31:43.442 --rc genhtml_function_coverage=1 00:31:43.442 --rc genhtml_legend=1 00:31:43.442 --rc geninfo_all_blocks=1 00:31:43.442 --rc geninfo_unexecuted_blocks=1 00:31:43.442 00:31:43.442 ' 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:43.442 17:47:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:43.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
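[annotation] The `[: : integer expression expected` line above is not a test-suite failure: `'[' '' -eq 1 ']'` hands test(1) an empty operand for a numeric comparison, which is an error (exit status 2), so the optional app-arg branch at nvmf/common.sh line 33 is simply skipped. Two-line repro plus the usual guard; the variable name below is hypothetical, for illustration only:

    [ '' -eq 1 ]; echo $?                 # "[: : integer expression expected", prints 2
    [ "${SOME_OPTIONAL_FLAG:-0}" -eq 1 ]  # defaulting unset/empty vars to 0 avoids the noise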
subnqn=nqn.2024-02.io.spdk:cnode0 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:43.442 17:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.582 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:51.583 17:47:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:51.583 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:51.583 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.583 
17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:51.583 Found net devices under 0000:31:00.0: cvl_0_0 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:51.583 Found net devices under 0000:31:00.1: cvl_0_1 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:51.583 17:47:42 
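[annotation] The device probing above has two halves: NICs are bucketed by vendor:device ID through the `pci_bus_cache` associative array (keys as in the trace; the value shape, space-separated PCI addresses, is inferred), and each selected function is then mapped to its kernel interface names through sysfs. Condensed sketch:

    intel=0x8086
    declare -A pci_bus_cache=(["$intel:0x159b"]="0000:31:00.0 0000:31:00.1")
    e810=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})   # E810-C, none on this rig
    e810+=(${pci_bus_cache["$intel:0x159b"]})   # the two ports found above
    for pci in "${e810[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip to interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done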
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:51.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:51.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:31:51.583 00:31:51.583 --- 10.0.0.2 ping statistics --- 00:31:51.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.583 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:51.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:51.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:31:51.583 00:31:51.583 --- 10.0.0.1 ping statistics --- 00:31:51.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.583 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.583 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=526049 00:31:51.584 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 526049 00:31:51.584 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:51.584 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 526049 ']' 00:31:51.584 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:51.584 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:51.584 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
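[annotation] Condensed replay of the nvmf_tcp_init plumbing traced above: the target-side port cvl_0_0 is moved into its own namespace so target (10.0.0.2) and initiator (10.0.0.1) traffic crosses the physical link instead of loopback, port 4420 is opened, and reachability is checked both ways. Commands as run in this job:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target NIC into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # host -> namespace (0.604 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # namespace -> host (0.298 ms above)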
00:31:51.584 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:51.584 17:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=b7fff51fc793b24b1e7ef1b95853811a 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.m4y 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key b7fff51fc793b24b1e7ef1b95853811a 0 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 b7fff51fc793b24b1e7ef1b95853811a 0 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=b7fff51fc793b24b1e7ef1b95853811a 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:31:51.584 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:31:51.845 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.m4y 00:31:51.845 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.m4y 00:31:51.845 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.m4y 00:31:51.845 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:51.845 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:31:51.845 17:47:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:51.845 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:31:51.845 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:31:51.845 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:31:51.845 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:51.845 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=ea369335c8b4a3f9a7a49dc8c5aca185ac2f4557d775bd73ec1cbf598079e5a2 00:31:51.845 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:31:51.845 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.MBN 00:31:51.845 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key ea369335c8b4a3f9a7a49dc8c5aca185ac2f4557d775bd73ec1cbf598079e5a2 3 00:31:51.845 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 ea369335c8b4a3f9a7a49dc8c5aca185ac2f4557d775bd73ec1cbf598079e5a2 3 00:31:51.845 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:31:51.845 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:31:51.845 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=ea369335c8b4a3f9a7a49dc8c5aca185ac2f4557d775bd73ec1cbf598079e5a2 00:31:51.845 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:31:51.845 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:31:51.845 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.MBN 00:31:51.845 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.MBN 00:31:51.845 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.MBN 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=a522fc025e5e6ac7f5640573b47abfed42ca0a93ecde1e75 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.3ax 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key a522fc025e5e6ac7f5640573b47abfed42ca0a93ecde1e75 0 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 a522fc025e5e6ac7f5640573b47abfed42ca0a93ecde1e75 0 
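[annotation] gen_dhchap_key above draws N random bytes as a hex string via xxd, then format_dhchap_key wraps it as an NVMe DH-HMAC-CHAP configured secret: DHHC-1:<hash-id>:<base64(secret || crc32(secret), CRC little-endian)>:, with the hash id from the digests map in the trace (null=00, sha256=01, sha384=02, sha512=03). Sketch of the inline python step, reconstructed from that format rather than copied from common.sh; it should reproduce the secret that shows up later at host/auth.sh@45:

    key=a522fc025e5e6ac7f5640573b47abfed42ca0a93ecde1e75   # the 48 hex chars drawn above
    python3 - "$key" <<'EOF'
    import base64, sys, zlib
    secret = sys.argv[1].encode()                    # the ASCII hex string itself is the secret
    crc = zlib.crc32(secret).to_bytes(4, "little")   # 4-byte CRC32 appended before encoding
    print("DHHC-1:00:" + base64.b64encode(secret + crc).decode() + ":")
    EOF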
00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=a522fc025e5e6ac7f5640573b47abfed42ca0a93ecde1e75 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.3ax 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.3ax 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.3ax 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=7eb69240321fd6df2d90037b20c965e48685be63d86965bc 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.Ggb 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 7eb69240321fd6df2d90037b20c965e48685be63d86965bc 2 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 7eb69240321fd6df2d90037b20c965e48685be63d86965bc 2 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=7eb69240321fd6df2d90037b20c965e48685be63d86965bc 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.Ggb 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.Ggb 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Ggb 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:51.846 17:47:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=ab1e5397832947fa0c16c2bd64302ca5 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.1Zp 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key ab1e5397832947fa0c16c2bd64302ca5 1 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 ab1e5397832947fa0c16c2bd64302ca5 1 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=ab1e5397832947fa0c16c2bd64302ca5 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:31:51.846 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:31:52.107 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.1Zp 00:31:52.107 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.1Zp 00:31:52.107 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.1Zp 00:31:52.107 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:52.107 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:31:52.107 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=3ef2b72c54f201641983858e9a32d34c 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.xyZ 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 3ef2b72c54f201641983858e9a32d34c 1 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 3ef2b72c54f201641983858e9a32d34c 1 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=3ef2b72c54f201641983858e9a32d34c 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.xyZ 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.xyZ 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.xyZ 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=3b67f0930d9c47802751e927a6480d9f5e529437e2088a51 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.4Ia 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 3b67f0930d9c47802751e927a6480d9f5e529437e2088a51 2 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 3b67f0930d9c47802751e927a6480d9f5e529437e2088a51 2 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=3b67f0930d9c47802751e927a6480d9f5e529437e2088a51 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:31:52.108 17:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.4Ia 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.4Ia 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.4Ia 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:31:52.108 17:47:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=51a48d6241a55f53ddf278af7959a5b1 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Sbo 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 51a48d6241a55f53ddf278af7959a5b1 0 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 51a48d6241a55f53ddf278af7959a5b1 0 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=51a48d6241a55f53ddf278af7959a5b1 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Sbo 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Sbo 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Sbo 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=772598d8cb0e1e76a66677884c8a321f35517d44bc529622085a07270ed16e52 00:31:52.108 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.goh 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 772598d8cb0e1e76a66677884c8a321f35517d44bc529622085a07270ed16e52 3 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 772598d8cb0e1e76a66677884c8a321f35517d44bc529622085a07270ed16e52 3 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=772598d8cb0e1e76a66677884c8a321f35517d44bc529622085a07270ed16e52 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.goh 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.goh 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.goh 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 526049 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 526049 ']' 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:52.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.m4y 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.MBN ]] 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MBN 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.3ax 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Ggb ]] 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Ggb 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.1Zp 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.369 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.636 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.636 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.xyZ ]] 00:31:52.636 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.xyZ 00:31:52.636 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.636 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.636 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.636 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:52.636 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.4Ia 00:31:52.636 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.636 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.636 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.636 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Sbo ]] 00:31:52.636 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Sbo 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.goh 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:52.637 17:47:44 
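[annotation] The rpc_cmd calls above reduce to one loop over the generated files (host/auth.sh@80-82 as traced), registering each as a named keyring entry so later auth options can refer to keys by name instead of path. Note ckeys[4] is empty, so key4 gets no controller key:

    for i in "${!keys[@]}"; do
        rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
        if [[ -n ${ckeys[i]} ]]; then
            rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
        fi
    done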
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:52.637 17:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:55.941 Waiting for block devices as requested 00:31:55.941 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:56.200 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:56.200 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:56.200 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:56.200 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:56.460 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:56.460 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:56.460 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:56.719 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:56.719 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:56.979 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:56.979 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:56.979 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:56.979 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:57.239 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:57.239 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:57.239 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:58.180 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:31:58.180 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:58.180 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:31:58.180 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:31:58.180 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:58.180 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:58.181 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:31:58.181 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:58.181 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:58.181 No valid GPT data, bailing 00:31:58.181 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:58.181 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:31:58.181 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:31:58.181 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:31:58.181 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:31:58.181 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:58.181 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:58.181 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:58.181 17:47:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:58.181 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:31:58.181 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:31:58.181 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:31:58.181 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:31:58.181 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:31:58.181 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:31:58.181 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:31:58.181 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:31:58.441 00:31:58.441 Discovery Log Number of Records 2, Generation counter 2 00:31:58.441 =====Discovery Log Entry 0====== 00:31:58.441 trtype: tcp 00:31:58.441 adrfam: ipv4 00:31:58.441 subtype: current discovery subsystem 00:31:58.441 treq: not specified, sq flow control disable supported 00:31:58.441 portid: 1 00:31:58.441 trsvcid: 4420 00:31:58.441 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:58.441 traddr: 10.0.0.1 00:31:58.441 eflags: none 00:31:58.441 sectype: none 00:31:58.441 =====Discovery Log Entry 1====== 00:31:58.441 trtype: tcp 00:31:58.441 adrfam: ipv4 00:31:58.441 subtype: nvme subsystem 00:31:58.441 treq: not specified, sq flow control disable supported 00:31:58.441 portid: 1 00:31:58.441 trsvcid: 4420 00:31:58.441 subnqn: nqn.2024-02.io.spdk:cnode0 00:31:58.441 traddr: 10.0.0.1 00:31:58.441 eflags: none 00:31:58.441 sectype: none 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
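[annotation] xtrace drops redirections, so the bare `echo` entries above are missing their targets; below is the standard nvmet configfs wiring they correspond to (attribute file names are the kernel's, reconstructed rather than visible in the trace), followed by the host-ACL step performed by the `mkdir hosts/...`, `echo 0`, and `ln -s allowed_hosts` entries:

    sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$sub/attr_serial"
    echo 1            > "$sub/attr_allow_any_host"   # open first so discovery works
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"
    # host ACL: only nqn.2024-02.io.spdk:host0 may connect once allow_any_host is cleared
    mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 0 > "$sub/attr_allow_any_host"
    ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 "$sub/allowed_hosts/"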
-- host/auth.sh@49 -- # echo ffdhe2048 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: ]] 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:58.441 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.442 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:58.442 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.442 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.442 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.442 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.442 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:58.442 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:58.442 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:58.442 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.442 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.442 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:58.442 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.442 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:58.442 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:58.442 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:58.442 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:58.442 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.442 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.702 nvme0n1 00:31:58.702 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.702 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.702 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.702 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: ]] 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.703 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.963 nvme0n1 00:31:58.963 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.963 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.963 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.964 17:47:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: ]] 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.964 17:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.224 nvme0n1 00:31:59.224 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.224 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.224 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.224 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.224 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: ]] 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.225 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.485 nvme0n1 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: ]] 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.485 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.486 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:59.486 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.486 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:59.486 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:59.486 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:59.486 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:59.486 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.486 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.747 nvme0n1 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.747 nvme0n1 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.747 17:47:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.747 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.008 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.008 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.008 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.008 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.008 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.008 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:00.008 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.008 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:00.008 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.008 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:00.008 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:00.008 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:00.008 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:00.008 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:00.008 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:00.008 17:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:00.268 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:00.268 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: ]] 00:32:00.268 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:00.268 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:00.268 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.268 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:00.268 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:00.268 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:00.268 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.268 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:00.268 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.268 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.268 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.268 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.268 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:00.268 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:00.268 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:00.268 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.268 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.268 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:00.269 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.269 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:00.269 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:00.269 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:00.269 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:00.269 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.269 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.269 nvme0n1 00:32:00.269 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.269 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.269 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.269 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.269 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.269 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: ]] 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:00.530 
17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.530 nvme0n1 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.530 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: ]] 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.791 17:47:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:00.791 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.792 nvme0n1 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.792 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: ]] 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.052 17:47:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.052 17:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.052 nvme0n1 00:32:01.052 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.052 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.052 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.052 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.052 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:01.313 17:47:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.313 nvme0n1 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.313 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.574 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.574 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.574 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:32:01.574 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.574 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.574 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.574 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:01.574 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.574 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:01.574 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.574 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:01.574 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:01.574 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:01.574 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:01.574 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:01.574 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:01.574 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: ]] 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.145 17:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.405 nvme0n1 00:32:02.405 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.405 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.405 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.405 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.405 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.405 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.405 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.405 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.405 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.405 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:02.406 17:47:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: ]] 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.406 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.667 nvme0n1 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: ]] 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
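For reference, the initiator-side sequence this iteration of the trace exercises condenses to two SPDK RPCs: one restricting the allowed DH-HMAC-CHAP digests and DH groups, one attaching with this iteration's keys. A minimal sketch, assuming rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py and that key2/ckey2 were registered with the target earlier in the run:

    # Sketch of this iteration (digest sha256, dhgroup ffdhe4096, keyid 2).
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2   # attach fails unless CHAP succeeds
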
00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.667 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.928 nvme0n1 00:32:02.928 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.928 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.928 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.928 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.928 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.928 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: ]] 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.188 17:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.448 nvme0n1 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:03.448 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:03.449 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:03.449 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.449 17:47:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:03.449 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.449 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.449 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.449 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.449 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:03.449 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:03.449 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:03.449 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.449 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.449 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:03.449 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.449 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:03.449 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:03.449 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:03.449 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:03.449 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.449 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.709 nvme0n1 00:32:03.709 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.709 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.709 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.709 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.709 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.709 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.709 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.709 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.709 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.709 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.709 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.709 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:03.709 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.709 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:32:03.709 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.709 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:03.709 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:03.709 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:03.709 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:03.709 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:03.709 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:03.709 17:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:05.619 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: ]] 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.620 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.880 nvme0n1 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: ]] 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 
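The echo records above are nvmet_auth_set_key provisioning the target (kernel nvmet) side for this iteration. A sketch of what those writes amount to, assuming the standard nvmet configfs layout; the host directory path is an assumption, and $key/$ckey are the locals visible in the trace:

    # Assumed configfs host entry created earlier in the test.
    HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$HOST/dhchap_hash"       # digest for DH-HMAC-CHAP
    echo "$dhgroup"     > "$HOST/dhchap_dhgroup"    # ffdhe6144 in this iteration
    echo "$key"         > "$HOST/dhchap_key"        # DHHC-1:00:... host secret (keyid 1)
    # Controller key is written only when a ckey is defined (the [[ -z ... ]]
    # check at auth.sh@51 above), enabling bidirectional authentication.
    [[ -n $ckey ]] && echo "$ckey" > "$HOST/dhchap_ctrl_key"
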
00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.880 17:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.451 nvme0n1 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.451 17:47:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: ]] 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.451 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.021 nvme0n1 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: ]] 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:07.021 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:07.022 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:07.022 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.022 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.022 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:07.022 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.022 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:07.022 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:07.022 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:07.022 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:07.022 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.022 17:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.282 nvme0n1 00:32:07.282 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.282 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.282 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.282 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.282 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.282 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.542 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.802 nvme0n1 00:32:07.802 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.802 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.802 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.802 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.802 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.802 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.802 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.802 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.802 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.802 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: ]] 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.061 17:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:08.629 nvme0n1 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: ]] 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.629 17:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.569 nvme0n1 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:09.569 
17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: ]] 00:32:09.569 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.570 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.141 nvme0n1 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: ]] 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.141 
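connect_authenticate (host/auth.sh@55-65) is the host half of each iteration: it restricts the SPDK initiator to a single digest and DH group via bdev_nvme_set_options, attaches with the matching --dhchap-key name (plus the controller key when one is defined), confirms the controller shows up in bdev_nvme_get_controllers, and detaches again. The traced RPCs, collected into one sketch (the key0..key4 names refer to keys registered earlier in the run, outside this excerpt):

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

The bare nvme0n1 lines interleaved with the trace appear to be the controller's namespace block device surfacing in the output as each authenticated attach completes.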
17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.141 17:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.141 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.141 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.141 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:10.141 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:10.141 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:10.141 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.141 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.141 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:10.141 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.141 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:10.141 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:10.141 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:10.142 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:10.142 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.142 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.711 nvme0n1 00:32:10.711 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.711 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.711 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.711 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.711 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.711 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.711 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.711 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.711 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.711 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
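Every secret in this trace uses the NVMe DH-HMAC-CHAP representation DHHC-1:<nn>:<base64>:. The two-digit field identifies the secret variant (00 = plain; 01/02/03 = secrets sized for SHA-256/384/512, i.e. 32/48/64 bytes), and the base64 payload appears to end with a 4-byte CRC after the secret, which the lengths in this log bear out. A quick check against the keyid=0 secret from above:

# Inspecting one of the logged keys; 48 base64 chars decode to 36 bytes,
# consistent with a 32-byte secret plus a 4-byte CRC.
key='DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb:'
payload=${key#DHHC-1:??:}
payload=${payload%:}
echo -n "$payload" | base64 -d | wc -c   # -> 36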
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.973 17:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.545 nvme0n1 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: ]] 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.545 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.806 nvme0n1 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
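With sha256 exhausted, the trace above restarts at host/auth.sh@100 with digest sha384 and the first DH group, ffdhe2048. The nesting that generates the whole sequence, reconstructed from the @100/@101/@102 loop headers in the trace:

for digest in "${digests[@]}"; do            # host/auth.sh@100: sha256, sha384, ...
    for dhgroup in "${dhgroups[@]}"; do      # host/auth.sh@101: ffdhe2048 .. ffdhe8192
        for keyid in "${!keys[@]}"; do       # host/auth.sh@102: 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # @103
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104
        done
    done
done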
common/autotest_common.sh@10 -- # set +x 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: ]] 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.806 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.067 nvme0n1 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:12.067 17:48:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: ]] 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.067 17:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.067 nvme0n1 00:32:12.067 17:48:04 
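The common/autotest_common.sh@561 xtrace_disable / @10 set +x / @589 [[ 0 == 0 ]] triplet that brackets every rpc_cmd above is SPDK's pattern for silencing xtrace while an RPC runs and then re-asserting its exit status once tracing is back on. A simplified sketch of the idiom; the real helpers also save and restore the caller's exact shell flags, and the rpc.py invocation shown here is an assumption (the wrapper's internals are not visible in this log):

xtrace_disable() { set +x; }
xtrace_restore() { set -x; }

rpc_cmd() {
    xtrace_disable
    local rc=0
    "$rootdir/scripts/rpc.py" "$@" || rc=$?   # assumed invocation
    xtrace_restore
    [[ $rc == 0 ]]                            # trace: @589 [[ 0 == 0 ]]
}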
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: ]] 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.328 nvme0n1 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.328 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.589 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.590 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:12.590 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.590 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:12.590 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:12.590 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:12.590 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:12.590 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
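keyid=4 has no controller key (ckey='' at host/auth.sh@46), so the ${ckeys[keyid]:+...} expansion at @58 contributes nothing and the attach above is issued with --dhchap-key key4 alone, while keyids 0-3 also pass --dhchap-ctrlr-key. The idiom in isolation (array values elided):

ckeys=([0]='DHHC-1:03:...' [4]='')   # elided; only empty vs. non-empty matters
for keyid in 0 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${#ckey[@]} extra args"   # 2 for keyid=0, 0 for keyid=4
done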
common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.590 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.590 nvme0n1 00:32:12.590 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.590 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.590 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.590 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.590 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.590 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.590 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.590 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.590 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.590 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: ]] 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:12.850 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:12.851 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:12.851 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.851 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.851 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:12.851 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.851 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:12.851 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:12.851 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:12.851 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:12.851 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.851 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.851 nvme0n1 00:32:12.851 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.851 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.851 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.851 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.851 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.851 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.851 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.851 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.851 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.851 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.111 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.111 
17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.111 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:13.111 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.111 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:13.111 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:13.111 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:13.111 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:13.111 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:13.111 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:13.111 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:13.111 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:13.111 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: ]] 00:32:13.111 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:13.111 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:32:13.111 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.111 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:13.112 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:13.112 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:13.112 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.112 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:13.112 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.112 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.112 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.112 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.112 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:13.112 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:13.112 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:13.112 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.112 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.112 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:13.112 17:48:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.112 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:13.112 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:13.112 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:13.112 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:13.112 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.112 17:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.112 nvme0n1 00:32:13.112 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.112 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.112 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.112 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.112 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.112 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.112 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.112 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.112 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.112 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.112 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.112 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.112 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:13.112 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.112 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:13.112 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:13.112 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:13.112 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: ]] 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.372 nvme0n1 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.372 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: ]] 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.633 nvme0n1 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.633 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:13.894 
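[annotation] The bare echoes at host/auth.sh@48-51 in these frames (the digest name 'hmac(sha384)', the dhgroup, then the key and optional controller key) are the body of nvmet_auth_set_key; xtrace does not show where the output is redirected, but the arguments match the Linux kernel nvmet target's per-host authentication attributes, so the presumed shape is something like the following (a sketch under that assumption; the configfs path and attribute names are not confirmed by this excerpt):

    # presumed destination of the auth.sh@48-51 echoes: kernel nvmet configfs
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host/dhchap_hash"      # the @48 echo
    echo ffdhe3072      > "$host/dhchap_dhgroup"   # the @49 echo
    echo "$key"         > "$host/dhchap_key"       # the @50 echo
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # the @51 guard

This is why each keyid iteration touches the target first and only then re-attaches the SPDK initiator.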
17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.894 nvme0n1 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.894 
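[annotation] A word on the key material cycling through these frames: each secret is an NVMe-spec DH-HMAC-CHAP secret string of the form DHHC-1:<hh>:<base64-secret>:, where the two-digit <hh> field records the transformation hash the secret was generated for (00 = untransformed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). The ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion at auth.sh@58 makes the controller key an optional argument, and keyid 4 is the entry with an empty ckey (the [[ -z '' ]] test above), so its attach carries --dhchap-key key4 alone and exercises unidirectional, host-only authentication. Secrets of this shape are not minted in this excerpt, but nvme-cli can generate them (a hypothetical example, assuming a reasonably recent nvme-cli, not a command from this run):

    # 48-byte secret tagged for SHA-384 transformation (field 02),
    # bound to this run's host NQN
    nvme gen-dhchap-key --hmac=2 --key-length=48 --nqn nqn.2024-02.io.spdk:host0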
17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.894 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: ]] 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.155 17:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.416 nvme0n1 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: ]] 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:14.416 17:48:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.416 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.677 nvme0n1 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: ]] 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:14.677 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.678 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:14.678 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:14.678 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:14.678 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.678 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:14.678 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.678 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.678 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.678 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.678 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:14.678 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:14.678 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:14.678 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.678 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.678 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:14.678 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.678 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:14.678 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:14.678 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:14.678 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:14.678 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.678 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.938 nvme0n1 00:32:14.938 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.938 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.938 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.938 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.938 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.938 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.938 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.938 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.938 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.938 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.938 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.938 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.938 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:32:14.938 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.938 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:14.938 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:14.938 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:14.938 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:14.938 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:14.938 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:14.938 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: ]] 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:14.939 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:15.199 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:15.199 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.199 17:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.199 nvme0n1 00:32:15.199 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.199 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.199 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.199 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.199 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:15.460 17:48:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.460 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.720 nvme0n1 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
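[annotation] Here the ffdhe4096 pass finishes and the outer loop advances to ffdhe6144. The structure driving all of these repeats can be read directly off the auth.sh@101-104 frames (a sketch reconstructed from the xtrace output; the digest is fixed at sha384 throughout this excerpt, presumably by an enclosing loop not visible here):

    for dhgroup in "${dhgroups[@]}"; do                      # auth.sh@101
        for keyid in "${!keys[@]}"; do                       # auth.sh@102
            nvmet_auth_set_key   sha384 "$dhgroup" "$keyid"  # auth.sh@103
            connect_authenticate sha384 "$dhgroup" "$keyid"  # auth.sh@104
        done
    done

So each dhgroup is exercised against every keyid (0-4) before the next group begins, which is why the keyid=0..4 cadence repeats verbatim under each ffdhe group.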
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: ]] 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.720 17:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.292 nvme0n1 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: ]] 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.292 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.554 nvme0n1 00:32:16.554 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.554 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.554 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.554 17:48:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.554 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.554 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.814 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.814 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.814 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.814 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.814 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.814 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.814 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:16.814 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.814 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:16.814 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:16.814 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:16.814 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:16.814 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:16.814 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:16.814 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:16.814 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:16.814 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: ]] 00:32:16.814 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.815 17:48:08 
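[annotation] The recurring nvmf/common.sh@767-781 frames are get_main_ns_ip(), called from auth.sh@61 to pick the address the initiator dials: it maps the transport to the name of the environment variable holding the target address and resolves it through indirect expansion. A sketch reconstructed from the trace (the variable tested by the "[[ -z tcp ]]" frame at @773 is inferred to be the harness's transport setting, tcp in this run, and the indirection step is inferred from the resolved 10.0.0.1 appearing at @776):

    get_main_ns_ip() {                                  # nvmf/common.sh@767-781
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP      # @770
        ip_candidates["tcp"]=NVMF_INITIATOR_IP          # @771
        [[ -z $TEST_TRANSPORT ]] && return 1            # the "[[ -z tcp ]]" test
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}            # @774: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                     # @776: resolves to 10.0.0.1
        echo "${!ip}"                                   # @781
    }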
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.815 17:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.075 nvme0n1 00:32:17.075 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.075 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.075 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.075 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.075 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.075 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
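Every keyid iteration in this trace has the same shape: the target is primed with the key material (the host/auth.sh@48-51 echoes), the host is pinned to a single digest and dhgroup via bdev_nvme_set_options, a controller is attached with the matching --dhchap-key/--dhchap-ctrlr-key pair, its presence is verified through bdev_nvme_get_controllers, and it is detached again. A condensed sketch of one such cycle; the configfs paths and $HOSTNQN on the target side are assumptions (the trace shows only the echoed values), and the helper names are illustrative stand-ins for the suite's nvmet_auth_set_key and connect_authenticate:

set_target_key() {    # target side: hand the secrets to kernel nvmet
    local digest=$1 dhgroup=$2 key=$3 ckey=$4
    local h=/sys/kernel/config/nvmet/hosts/$HOSTNQN   # assumed configfs location
    echo "hmac($digest)" > "$h/dhchap_hash"
    echo "$dhgroup"      > "$h/dhchap_dhgroup"
    echo "$key"          > "$h/dhchap_key"
    [[ -n $ckey ]] && echo "$ckey" > "$h/dhchap_ctrl_key"   # bidirectional auth only
}

connect_cycle() {     # host side: pin parameters, attach, verify, tear down
    local digest=$1 dhgroup=$2 keyid=$3
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    # the attach only succeeds if DH-HMAC-CHAP completed
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}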
key=DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: ]] 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:17.336 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.336 
17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.596 nvme0n1 00:32:17.596 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.596 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.596 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.596 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.596 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.596 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.596 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.596 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.596 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.596 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.596 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.596 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.857 17:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.118 nvme0n1 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.118 17:48:10 
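The keyid=4 pass that just completed is the interesting edge case: host/auth.sh@46 set ckey= (empty), and the attach consequently carried only --dhchap-key key4 with no --dhchap-ctrlr-key at all. That is the ${var:+word} expansion at host/auth.sh@58 at work: when ckeys[keyid] is empty, the ckey array expands to nothing, so bidirectional authentication is simply not requested for that key. A standalone demonstration of the idiom:

ckeys=([1]='DHHC-1:02:placeholder==:' [4]='')   # keyid 4 deliberately has no ctrl key

for keyid in 1 4; do
    # the same construct as host/auth.sh@58: two words, or nothing
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid adds ${#ckey[@]} arg(s): ${ckey[*]}"
done
# keyid=1 adds 2 arg(s): --dhchap-ctrlr-key ckey1
# keyid=4 adds 0 arg(s):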
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: ]] 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.118 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.059 nvme0n1 00:32:19.059 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.059 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.059 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.059 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.059 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.059 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.059 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.059 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.059 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.059 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.059 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.059 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.059 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:32:19.059 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.059 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:19.059 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:19.059 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:19.059 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:19.059 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:19.059 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:19.059 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: ]] 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
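The nvmf/common.sh@767-781 block repeated before every attach is get_main_ns_ip. It maps the transport to the name of the environment variable that holds a reachable address and then resolves that name indirectly, which is why the trace shows ip=NVMF_INITIATOR_IP immediately followed by echo 10.0.0.1. A sketch of that selection logic; the array keys and variable names come from the trace, while the transport variable ($transport below) and the exact guards are assumptions:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
        [tcp]=NVMF_INITIATOR_IP       # TCP runs dial the initiator-side IP
    )
    [[ -n $transport && -n ${ip_candidates[$transport]} ]] || return 1
    ip=${ip_candidates[$transport]}
    [[ -n ${!ip} ]] && echo "${!ip}"  # indirect expansion of the chosen name
}

transport=tcp
NVMF_INITIATOR_IP=10.0.0.1
get_main_ns_ip   # prints 10.0.0.1, as in the trace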
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.060 17:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.630 nvme0n1 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: ]] 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.630 
17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.630 17:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.201 nvme0n1 00:32:20.201 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.201 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.201 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.201 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.201 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: ]] 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.462 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.032 nvme0n1 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.032 17:48:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:21.032 17:48:12 
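A reading aid for the verification step that recurs at host/auth.sh@64: the right-hand side prints as \n\v\m\e\0 only because xtrace re-escapes a quoted string character by character, so the check is a literal comparison of the controller name, not a glob match. Reproducible in isolation:

set -x
name=nvme0                 # stands in for: rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'
[[ $name == "nvme0" ]]     # traces as: [[ nvme0 == \n\v\m\e\0 ]]
set +x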
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.032 17:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.603 nvme0n1 00:32:21.603 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.603 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.603 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.603 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.603 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.603 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
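The host/auth.sh@100 line above is the first time the outermost loop advances in this excerpt: the digest moves from sha384 to sha512 and the dhgroup list restarts at ffdhe2048, which confirms that auth.sh@100-104 drives a full digest x dhgroup x keyid cross product. A runnable sketch of that driver shape; the array contents are limited to the values visible in this slice (the full suite likely covers more), and the stub bodies stand in for the suite's real helpers:

nvmet_auth_set_key()   { echo "target <- digest=$1 dhgroup=$2 keyid=$3"; }
connect_authenticate() { echo "host   -> digest=$1 dhgroup=$2 keyid=$3"; }

digests=(sha384 sha512)                    # only these two appear in this excerpt
dhgroups=(ffdhe2048 ffdhe6144 ffdhe8192)   # likewise
keys=(k0 k1 k2 k3 k4)                      # placeholders for the five DHHC-1 secrets

for digest in "${digests[@]}"; do              # auth.sh@100
    for dhgroup in "${dhgroups[@]}"; do        # auth.sh@101
        for keyid in "${!keys[@]}"; do         # auth.sh@102
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # auth.sh@103
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # auth.sh@104
        done
    done
done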
ckey=DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: ]] 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:21.863 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:21.864 nvme0n1 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: ]] 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
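One more recurring pattern worth decoding: each rpc_cmd invocation is bracketed by common/autotest_common.sh@561 xtrace_disable and, once tracing resumes, a common/autotest_common.sh@589 [[ 0 == 0 ]]. The harness silences xtrace inside the JSON-RPC helper to keep the log readable, stashes the helper's exit status, and re-tests it in traced code, which is what surfaces as the literal [[ 0 == 0 ]]. A minimal reconstruction of the idiom; the bodies are assumptions, and SPDK's real rpc_cmd, which forwards the call to the target's JSON-RPC server, is more involved:

xtrace_disable() { set +x; }   # the suite's version also remembers the prior state

rpc_cmd() {
    xtrace_disable
    local rc=0
    "$@" || rc=$?   # stand-in for forwarding the command to the RPC server
    set -x
    [[ $rc == 0 ]]  # appears in the trace as: [[ 0 == 0 ]]
}

rpc_cmd true   # succeeds; a failing command would make rpc_cmd return nonzero
set +x         # undo the tracing the sketch leaves enabled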
"ckey${keyid}"}) 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:21.864 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.124 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.124 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.124 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.124 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:22.124 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:22.124 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:22.124 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.124 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.124 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:22.124 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.124 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:22.124 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:22.124 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:22.124 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:22.124 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.124 17:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.124 nvme0n1 00:32:22.124 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:22.125 
17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: ]] 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.125 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.386 nvme0n1 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: ]] 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.386 
17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.386 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.387 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:22.387 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.387 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:22.387 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:22.387 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:22.387 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:22.387 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.387 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.647 nvme0n1 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.647 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.908 nvme0n1 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: ]] 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.908 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.168 nvme0n1 00:32:23.168 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.168 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.168 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.168 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.168 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.168 17:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.168 
17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.168 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.168 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.168 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.168 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.168 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.168 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:23.168 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.168 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:23.168 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: ]] 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:23.169 17:48:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.169 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.430 nvme0n1 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:23.430 17:48:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: ]] 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.430 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.690 nvme0n1 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: ]] 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.691 17:48:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.691 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.952 nvme0n1 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:23.952 
17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.952 17:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
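Condensed, each iteration of the trace above and below amounts to roughly the following shell sequence. This is a sketch reconstructed from the xtrace output in this log, not the verbatim host/auth.sh source: rpc_cmd, nvmet_auth_set_key and get_main_ns_ip are helpers defined by the suite itself (common/autotest_common.sh, host/auth.sh, nvmf/common.sh), the digest is fixed at sha512 in this stretch, and only the dhgroups/keyids actually exercised here are noted.

  for dhgroup in "${dhgroups[@]}"; do     # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144 in this stretch
    for keyid in "${!keys[@]}"; do        # keyids 0..4; keyid 4 has no controller key (ckey is empty)
      # program the target side with the DHHC-1 host secret (and controller secret,
      # when one exists) -- the echo 'hmac(sha512)' / dhgroup / key lines above
      nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
      # restrict the SPDK host to the digest/dhgroup under test
      rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
      # connect to the target resolved by get_main_ns_ip (10.0.0.1 here); the
      # ctrlr-key argument is only passed for keyids that define a ckey
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
          -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" \
          ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
      # authentication succeeded iff the controller came up, then tear it down
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done

The bare "nvme0n1" lines interleaved in the trace are the bdev names printed by each successful bdev_nvme_attach_controller call, and the [[ nvme0 == \n\v\m\e\0 ]] comparisons are how xtrace renders that name check (bash backslash-escapes the pattern side of ==).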
00:32:24.213 nvme0n1 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: ]] 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:24.213 17:48:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.213 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.474 nvme0n1 00:32:24.474 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.474 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.475 17:48:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: ]] 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.475 17:48:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.475 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.736 nvme0n1 00:32:24.736 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.736 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.736 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.736 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.736 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.736 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: ]] 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:24.996 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:24.997 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.997 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:24.997 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.997 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.997 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.997 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.997 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:24.997 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:24.997 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:24.997 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.997 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.997 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:24.997 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.997 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:24.997 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:24.997 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:24.997 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:24.997 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.997 17:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.258 nvme0n1 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: ]] 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.258 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.520 nvme0n1 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.520 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.781 nvme0n1 00:32:25.781 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.781 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.781 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.781 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.781 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.781 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.781 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.781 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.781 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.781 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: ]] 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.042 17:48:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.042 17:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.303 nvme0n1 00:32:26.303 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.303 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.303 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.303 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.303 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.303 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.303 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: ]] 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.304 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.563 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.563 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.563 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:26.564 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:26.564 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:26.564 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.564 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.564 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:26.564 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.564 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:26.564 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:26.564 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:26.564 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
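Every successful iteration in this sweep drives the same pair of host-side RPCs that the trace above just ran for sha512/ffdhe6144 with keyid 1: bdev_nvme_set_options pins the initiator to a single digest and DH group, then bdev_nvme_attach_controller connects to the kernel target at 10.0.0.1:4420 with the DH-HMAC-CHAP host key and, when a controller key exists for that keyid, the bidirectional controller key. A minimal sketch of the sequence, assuming SPDK's scripts/rpc.py client (the rpc_cmd seen in the trace is a wrapper around it) and key objects key1/ckey1 registered earlier in the run:

  # Pin the host to one digest/DH-group pair (mirrors host/auth.sh@60).
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  # Connect with DH-HMAC-CHAP; --dhchap-ctrlr-key is only appended when the keyid
  # has a controller key, which is what the ckey array expansion at auth.sh@58 does.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

When the handshake succeeds the controller registers as nvme0 and its namespace surfaces as nvme0n1, the marker printed between iterations below.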
00:32:26.564 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.564 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.824 nvme0n1 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: ]] 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.824 17:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.395 nvme0n1 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: ]] 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.395 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.966 nvme0n1 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:27.966 17:48:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:27.966 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:27.967 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:27.967 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.967 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.967 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:27.967 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.967 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:27.967 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:27.967 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:27.967 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:27.967 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.967 17:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.227 nvme0n1 00:32:28.227 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.227 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.227 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.227 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.227 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
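Between attaches the test proves the controller actually exists before tearing it down: bdev_nvme_get_controllers is filtered through jq for the controller name, the result is compared against nvme0, and bdev_nvme_detach_controller then disconnects so the next digest/dhgroup/keyid combination starts from a clean slate. A sketch of that verify-and-detach step, under the same scripts/rpc.py assumption as above:

  # Confirm the authenticated controller came up, then drop it (host/auth.sh@64-65).
  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]
  scripts/rpc.py bdev_nvme_detach_controller nvme0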
00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjdmZmY1MWZjNzkzYjI0YjFlN2VmMWI5NTg1MzgxMWHLhTPb: 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: ]] 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWEzNjkzMzVjOGI0YTNmOWE3YTQ5ZGM4YzVhY2ExODVhYzJmNDU1N2Q3NzViZDczZWMxY2JmNTk4MDc5ZTVhMl13cXQ=: 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:28.487 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.488 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:28.488 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:28.488 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:28.488 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:28.488 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.488 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.060 nvme0n1 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: ]] 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.060 17:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.060 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.060 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.060 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:29.060 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:29.060 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:29.060 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.060 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.060 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:29.060 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.060 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:29.060 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:29.060 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:29.060 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:29.060 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.060 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.002 nvme0n1 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.002 17:48:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: ]] 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
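The get_main_ns_ip helper traced before every attach is what maps the transport under test to an address: an associative array pairs rdma with NVMF_FIRST_TARGET_IP and tcp with NVMF_INITIATOR_IP, the entry for the active transport is dereferenced indirectly, and the resulting address is echoed, 10.0.0.1 on this TCP run. A rough reconstruction from the trace at nvmf/common.sh@767-781; only the happy path is visible in the log, so the TEST_TRANSPORT variable name and the early-return error paths are assumptions:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # Fail if the transport is unset or has no candidate (common.sh@773).
      if [[ -z $TEST_TRANSPORT ]] || [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
          return 1
      fi
      ip=${ip_candidates[$TEST_TRANSPORT]}  # e.g. NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1           # indirect expansion yields 10.0.0.1 here
      echo "${!ip}"
  }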
00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.002 17:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.573 nvme0n1 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2I2N2YwOTMwZDljNDc4MDI3NTFlOTI3YTY0ODBkOWY1ZTUyOTQzN2UyMDg4YTUxnv/ujQ==: 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: ]] 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTFhNDhkNjI0MWE1NWY1M2RkZjI3OGFmNzk1OWE1YjFBafEw: 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:30.573 17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.574 
17:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.144 nvme0n1 00:32:31.144 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.144 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.144 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.144 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.144 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.144 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.404 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.404 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.404 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.404 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.404 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.404 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.404 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:31.404 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.404 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:31.404 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:31.404 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:31.404 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:31.404 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:31.404 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:31.404 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:31.404 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzcyNTk4ZDhjYjBlMWU3NmE2NjY3Nzg4NGM4YTMyMWYzNTUxN2Q0NGJjNTI5NjIyMDg1YTA3MjcwZWQxNmU1MqLcWSE=: 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.405 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.977 nvme0n1 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: ]] 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.977 request: 00:32:31.977 { 00:32:31.977 "name": "nvme0", 00:32:31.977 "trtype": "tcp", 00:32:31.977 "traddr": "10.0.0.1", 00:32:31.977 "adrfam": "ipv4", 00:32:31.977 "trsvcid": "4420", 00:32:31.977 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:31.977 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:31.977 "prchk_reftag": false, 00:32:31.977 "prchk_guard": false, 00:32:31.977 "hdgst": false, 00:32:31.977 "ddgst": false, 00:32:31.977 "allow_unrecognized_csi": false, 00:32:31.977 "method": "bdev_nvme_attach_controller", 00:32:31.977 "req_id": 1 00:32:31.977 } 00:32:31.977 Got JSON-RPC error response 00:32:31.977 response: 00:32:31.977 { 00:32:31.977 "code": -5, 00:32:31.977 "message": "Input/output error" 00:32:31.977 } 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:31.977 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:31.978 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:31.978 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:31.978 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:31.978 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.978 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:31.978 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.978 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.978 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.237 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:32.237 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:32.237 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:32.237 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:32.237 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:32.237 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.237 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.237 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:32.237 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.237 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:32.237 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
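The sequence above first provisions key4 (sha512/ffdhe8192) on the kernel nvmet side, restricts the SPDK initiator to the same digest and DH group via bdev_nvme_set_options, and performs one authenticated attach/detach; it then deliberately retries the attach with no DH-CHAP key at all and requires the RPC to fail. Condensed, the expected-failure half looks roughly like the sketch below. This is a simplified stand-in: the suite's real NOT() helper in autotest_common.sh does more exit-code bookkeeping, and rpc.py plus the 10.0.0.1:4420 target are assumed to match the environment shown in the log.

NOT() {
    # Invert the exit status: succeed only if the wrapped command fails.
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, which is what the test wants
}

# Attaching without --dhchap-key to a subsystem that requires DH-CHAP
# must be rejected; the log above shows it surfacing as -5 (I/O error).
NOT rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0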
00:32:32.238 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:32.238 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:32.238 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:32.238 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:32.238 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:32.238 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.238 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:32.238 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.238 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:32.238 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.238 17:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.238 request: 00:32:32.238 { 00:32:32.238 "name": "nvme0", 00:32:32.238 "trtype": "tcp", 00:32:32.238 "traddr": "10.0.0.1", 00:32:32.238 "adrfam": "ipv4", 00:32:32.238 "trsvcid": "4420", 00:32:32.238 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:32.238 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:32.238 "prchk_reftag": false, 00:32:32.238 "prchk_guard": false, 00:32:32.238 "hdgst": false, 00:32:32.238 "ddgst": false, 00:32:32.238 "dhchap_key": "key2", 00:32:32.238 "allow_unrecognized_csi": false, 00:32:32.238 "method": "bdev_nvme_attach_controller", 00:32:32.238 "req_id": 1 00:32:32.238 } 00:32:32.238 Got JSON-RPC error response 00:32:32.238 response: 00:32:32.238 { 00:32:32.238 "code": -5, 00:32:32.238 "message": "Input/output error" 00:32:32.238 } 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
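After each rejected attach the script verifies that the failed handshake left no state behind: bdev_nvme_get_controllers piped through jq length must still report zero controllers before the next case runs (the "(( 0 == 0 ))" checks above). A minimal sketch of that leak check, assuming rpc.py and jq are on PATH:

# A failed DH-CHAP handshake must not leave a half-built controller.
count=$(rpc.py bdev_nvme_get_controllers | jq length)
if (( count != 0 )); then
    echo "stale controller left behind after failed attach" >&2
    exit 1
fi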
00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.238 request: 00:32:32.238 { 00:32:32.238 "name": "nvme0", 00:32:32.238 "trtype": "tcp", 00:32:32.238 "traddr": "10.0.0.1", 00:32:32.238 "adrfam": "ipv4", 00:32:32.238 "trsvcid": "4420", 00:32:32.238 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:32.238 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:32.238 "prchk_reftag": false, 00:32:32.238 "prchk_guard": false, 00:32:32.238 "hdgst": false, 00:32:32.238 "ddgst": false, 00:32:32.238 "dhchap_key": "key1", 00:32:32.238 "dhchap_ctrlr_key": "ckey2", 00:32:32.238 "allow_unrecognized_csi": false, 00:32:32.238 "method": "bdev_nvme_attach_controller", 00:32:32.238 "req_id": 1 00:32:32.238 } 00:32:32.238 Got JSON-RPC error response 00:32:32.238 response: 00:32:32.238 { 00:32:32.238 "code": -5, 00:32:32.238 "message": "Input/output 
error" 00:32:32.238 } 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.238 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.498 nvme0n1 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: ]] 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.498 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.758 request: 00:32:32.758 { 00:32:32.758 "name": "nvme0", 00:32:32.758 "dhchap_key": "key1", 00:32:32.758 "dhchap_ctrlr_key": "ckey2", 00:32:32.759 "method": "bdev_nvme_set_keys", 00:32:32.759 "req_id": 1 00:32:32.759 } 00:32:32.759 Got JSON-RPC error response 00:32:32.759 response: 00:32:32.759 { 00:32:32.759 "code": -13, 00:32:32.759 "message": "Permission denied" 00:32:32.759 } 00:32:32.759 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:32.759 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:32.759 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:32.759 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:32.759 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:32:32.759 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.759 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.759 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:32.759 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.759 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.759 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:32:32.759 17:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:32:33.699 17:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.699 17:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:33.699 17:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.699 17:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.699 17:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.699 17:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:32:33.699 17:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:32:35.082 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.082 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:35.082 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.082 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.082 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.082 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:32:35.082 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:35.082 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.082 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.082 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:35.082 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:35.082 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:35.082 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:35.082 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.082 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:35.082 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUyMmZjMDI1ZTVlNmFjN2Y1NjQwNTczYjQ3YWJmZWQ0MmNhMGE5M2VjZGUxZTc1Wu7z9g==: 00:32:35.082 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: ]] 00:32:35.082 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:N2ViNjkyNDAzMjFmZDZkZjJkOTAwMzdiMjBjOTY1ZTQ4Njg1YmU2M2Q4Njk2NWJjWuiOeQ==: 00:32:35.082 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:32:35.082 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.083 nvme0n1 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWIxZTUzOTc4MzI5NDdmYTBjMTZjMmJkNjQzMDJjYTWfR6oW: 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: ]] 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VmMmI3MmM1NGYyMDE2NDE5ODM4NThlOWEzMmQzNGNpH6v1: 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.083 request: 00:32:35.083 { 00:32:35.083 "name": "nvme0", 00:32:35.083 "dhchap_key": "key2", 00:32:35.083 "dhchap_ctrlr_key": "ckey1", 00:32:35.083 "method": "bdev_nvme_set_keys", 00:32:35.083 "req_id": 1 00:32:35.083 } 00:32:35.083 Got JSON-RPC error response 00:32:35.083 response: 00:32:35.083 { 00:32:35.083 "code": -13, 00:32:35.083 "message": "Permission denied" 00:32:35.083 } 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:32:35.083 17:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:32:36.025 17:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.025 17:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:36.025 17:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.025 17:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.025 17:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.025 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:32:36.025 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:32:36.025 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:32:36.025 17:48:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:36.025 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:36.025 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:32:36.025 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:36.025 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:32:36.025 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:36.025 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:36.285 rmmod nvme_tcp 00:32:36.285 rmmod nvme_fabrics 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 526049 ']' 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 526049 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 526049 ']' 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 526049 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 526049 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 526049' 00:32:36.285 killing process with pid 526049 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 526049 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 526049 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:32:36.285 17:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.831 17:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:38.831 17:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:38.831 17:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:38.831 17:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:38.831 17:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:38.831 17:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:32:38.831 17:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:38.831 17:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:38.831 17:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:38.831 17:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:38.831 17:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:32:38.831 17:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:32:38.831 17:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:42.129 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:42.129 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:42.129 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:42.129 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:42.129 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:42.129 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:42.129 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:42.129 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:42.129 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:42.129 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:42.129 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:42.129 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:42.129 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:42.129 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:42.129 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:42.129 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:42.129 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:42.701 17:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.m4y /tmp/spdk.key-null.3ax /tmp/spdk.key-sha256.1Zp /tmp/spdk.key-sha384.4Ia /tmp/spdk.key-sha512.goh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:42.701 17:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:46.003 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:46.003 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:46.003 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:32:46.003 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:46.003 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:46.003 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:46.003 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:46.003 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:46.003 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:46.003 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:32:46.003 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:46.003 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:46.003 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:46.003 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:46.003 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:46.003 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:46.003 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:46.573 00:32:46.573 real 1m3.303s 00:32:46.573 user 0m57.112s 00:32:46.573 sys 0m16.038s 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.573 ************************************ 00:32:46.573 END TEST nvmf_auth_host 00:32:46.573 ************************************ 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.573 ************************************ 00:32:46.573 START TEST nvmf_digest 00:32:46.573 ************************************ 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:46.573 * Looking for test storage... 
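The cleanup that closed nvmf_auth_host a few lines above unwinds both sides: the host unloads nvme-tcp/nvme-fabrics and restores iptables, remove_spdk_ns deletes the test namespace, and clean_kernel_target dismantles the nvmet configfs tree before setup.sh rebinds the devices. The kernel-target half reduces to roughly the sketch below; the configfs paths are taken verbatim from the log, while directing the bare "echo 0" (common.sh@712) into the namespace's enable attribute is my reading of that step, not something the log states explicitly.

# Tear down the kernel nvmet subsystem created for the auth test.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
echo 0 > "$subsys/namespaces/1/enable"   # assumed target of the logged 'echo 0'
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
rmdir "$subsys/namespaces/1"
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet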
00:32:46.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:46.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.573 --rc genhtml_branch_coverage=1 00:32:46.573 --rc genhtml_function_coverage=1 00:32:46.573 --rc genhtml_legend=1 00:32:46.573 --rc geninfo_all_blocks=1 00:32:46.573 --rc geninfo_unexecuted_blocks=1 00:32:46.573 00:32:46.573 ' 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:46.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.573 --rc genhtml_branch_coverage=1 00:32:46.573 --rc genhtml_function_coverage=1 00:32:46.573 --rc genhtml_legend=1 00:32:46.573 --rc geninfo_all_blocks=1 00:32:46.573 --rc geninfo_unexecuted_blocks=1 00:32:46.573 00:32:46.573 ' 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:46.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.573 --rc genhtml_branch_coverage=1 00:32:46.573 --rc genhtml_function_coverage=1 00:32:46.573 --rc genhtml_legend=1 00:32:46.573 --rc geninfo_all_blocks=1 00:32:46.573 --rc geninfo_unexecuted_blocks=1 00:32:46.573 00:32:46.573 ' 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:46.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.573 --rc genhtml_branch_coverage=1 00:32:46.573 --rc genhtml_function_coverage=1 00:32:46.573 --rc genhtml_legend=1 00:32:46.573 --rc geninfo_all_blocks=1 00:32:46.573 --rc geninfo_unexecuted_blocks=1 00:32:46.573 00:32:46.573 ' 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:46.573 
17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:46.573 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:46.574 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:46.574 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:46.574 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:46.574 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:46.574 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:46.574 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:46.574 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:46.574 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:46.574 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:46.574 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:32:46.834 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:46.834 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:46.834 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:46.834 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:46.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:46.835 17:48:38 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:32:46.835 17:48:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:54.974 
17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:54.974 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:54.975 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:54.975 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:54.975 Found net devices under 0000:31:00.0: cvl_0_0 
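A note on the "[: : integer expression expected" diagnostic earlier in this trace: nvmf/common.sh line 33 expands to '[' '' -eq 1 ']', and -eq requires integer operands, so the empty string makes the [ builtin print that complaint and return nonzero. The run survives because a failed test simply reads as false, but the noise is avoidable by defaulting the variable before the comparison. A minimal bash sketch (the variable name here is hypothetical, not the one common.sh uses):

    flag=""
    [ "$flag" -eq 1 ] && echo on        # bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo on   # defaulting to 0 keeps the operand numeric

The repeated /opt/protoc, /opt/go and /opt/golangci segments in the PATH export above appear to be a related idempotency gap: paths/export.sh prepends the same directories each time it is sourced, so nested test scripts accumulate duplicates.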
00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:54.975 Found net devices under 0000:31:00.1: cvl_0_1 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0
00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:54.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:54.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms
00:32:54.975 
00:32:54.975 --- 10.0.0.2 ping statistics ---
00:32:54.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:54.975 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms
00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:54.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:54.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms
00:32:54.975 
00:32:54.975 --- 10.0.0.1 ping statistics ---
00:32:54.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:54.975 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms
00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0
00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:32:54.975 17:48:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT
00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]]
00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest
00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:32:54.975 ************************************
00:32:54.975 START TEST nvmf_digest_clean
00:32:54.975 ************************************
00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest
00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
host/digest.sh@120 -- # local dsa_initiator 00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=544276 00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 544276 00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 544276 ']' 00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:54.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:54.975 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:54.975 [2024-10-08 17:48:46.122900] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:32:54.975 [2024-10-08 17:48:46.122963] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:54.975 [2024-10-08 17:48:46.214171] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.975 [2024-10-08 17:48:46.308399] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:54.975 [2024-10-08 17:48:46.308463] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:54.975 [2024-10-08 17:48:46.308472] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:54.976 [2024-10-08 17:48:46.308479] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:54.976 [2024-10-08 17:48:46.308485] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
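The nvmf_tcp_init sequence traced above turns the two detected E810 ports into a point-to-point target/initiator pair: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), the NVMe/TCP listener port is opened in the firewall, and one ping in each direction proves the path. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

NVMF_APP is then prefixed with NVMF_TARGET_NS_CMD, which is why the nvmf_tgt invocation above runs under ip netns exec cvl_0_0_ns_spdk.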
00:32:54.976 [2024-10-08 17:48:46.309290] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:54.976 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:54.976 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:54.976 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:54.976 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:54.976 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:54.976 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:54.976 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:54.976 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:54.976 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:54.976 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.976 17:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:55.236 null0 00:32:55.236 [2024-10-08 17:48:47.030024] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:55.236 [2024-10-08 17:48:47.054231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:55.236 17:48:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.236 17:48:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:55.236 17:48:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:55.236 17:48:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:55.236 17:48:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:55.236 17:48:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:55.236 17:48:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:55.236 17:48:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:55.236 17:48:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=544457 00:32:55.236 17:48:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 544457 /var/tmp/bperf.sock 00:32:55.236 17:48:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 544457 ']' 00:32:55.236 17:48:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:55.236 17:48:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:55.236 17:48:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:32:55.236 17:48:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:55.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:55.236 17:48:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:55.236 17:48:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:55.236 [2024-10-08 17:48:47.110193] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:32:55.236 [2024-10-08 17:48:47.110242] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid544457 ] 00:32:55.236 [2024-10-08 17:48:47.187922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:55.496 [2024-10-08 17:48:47.253807] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:56.068 17:48:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:56.068 17:48:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:56.068 17:48:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:56.068 17:48:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:56.068 17:48:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:56.328 17:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:56.328 17:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:56.595 nvme0n1 00:32:56.595 17:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:56.595 17:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:56.595 Running I/O for 2 seconds... 
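run_bperf drives each workload over bdevperf's private RPC socket. Because bdevperf starts with -z and --wait-for-rpc, nothing runs until the script completes framework init, attaches the target with data digest enabled, and triggers the timed run. Condensed from the trace ($RPC is shorthand introduced here):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC -s /var/tmp/bperf.sock framework_start_init
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests

--ddgst enables the NVMe/TCP data digest, which is what places crc32c work on every request; the accel_get_stats check after each run confirms that work actually executed.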
00:32:58.919 18562.00 IOPS, 72.51 MiB/s [2024-10-08T15:48:50.911Z] 19750.00 IOPS, 77.15 MiB/s
00:32:58.919 Latency(us)
00:32:58.919 [2024-10-08T15:48:50.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:58.919 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:32:58.919 nvme0n1 : 2.00 19783.04 77.28 0.00 0.00 6462.89 2389.33 21626.88
00:32:58.919 [2024-10-08T15:48:50.911Z] ===================================================================================================================
00:32:58.919 [2024-10-08T15:48:50.911Z] Total : 19783.04 77.28 0.00 0.00 6462.89 2389.33 21626.88
00:32:58.919 {
00:32:58.919 "results": [
00:32:58.919 {
00:32:58.919 "job": "nvme0n1",
00:32:58.919 "core_mask": "0x2",
00:32:58.919 "workload": "randread",
00:32:58.919 "status": "finished",
00:32:58.919 "queue_depth": 128,
00:32:58.919 "io_size": 4096,
00:32:58.919 "runtime": 2.00404,
00:32:58.919 "iops": 19783.038262709328,
00:32:58.919 "mibps": 77.27749321370831,
00:32:58.919 "io_failed": 0,
00:32:58.919 "io_timeout": 0,
00:32:58.919 "avg_latency_us": 6462.885195311843,
00:32:58.919 "min_latency_us": 2389.3333333333335,
00:32:58.919 "max_latency_us": 21626.88
00:32:58.919 }
00:32:58.919 ],
00:32:58.919 "core_count": 1
00:32:58.919 }
00:32:58.919 17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:32:58.919 | select(.opcode=="crc32c")
00:32:58.919 | "\(.module_name) \(.executed)"'
17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 544457
17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 544457 ']'
17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 544457
17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 544457
17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 =
sudo ']' 00:32:58.919 17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 544457' 00:32:58.919 killing process with pid 544457 00:32:58.919 17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 544457 00:32:58.919 Received shutdown signal, test time was about 2.000000 seconds 00:32:58.919 00:32:58.919 Latency(us) 00:32:58.919 [2024-10-08T15:48:50.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:58.919 [2024-10-08T15:48:50.911Z] =================================================================================================================== 00:32:58.919 [2024-10-08T15:48:50.911Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:58.919 17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 544457 00:32:59.181 17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:59.181 17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:59.181 17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:59.181 17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:59.181 17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:59.181 17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:59.181 17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:59.181 17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=545152 00:32:59.181 17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 545152 /var/tmp/bperf.sock 00:32:59.181 17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 545152 ']' 00:32:59.181 17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:59.181 17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:59.181 17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:59.181 17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:59.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:59.181 17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:59.181 17:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:59.181 [2024-10-08 17:48:51.020131] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:32:59.181 [2024-10-08 17:48:51.020183] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid545152 ] 00:32:59.181 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:59.181 Zero copy mechanism will not be used. 00:32:59.181 [2024-10-08 17:48:51.098795] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:59.181 [2024-10-08 17:48:51.152075] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.123 17:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:00.123 17:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:33:00.123 17:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:00.123 17:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:00.123 17:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:00.123 17:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:00.123 17:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:00.694 nvme0n1 00:33:00.694 17:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:00.694 17:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:00.694 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:00.694 Zero copy mechanism will not be used. 00:33:00.694 Running I/O for 2 seconds... 
00:33:02.577 4150.00 IOPS, 518.75 MiB/s [2024-10-08T15:48:54.569Z] 3812.00 IOPS, 476.50 MiB/s
00:33:02.577 Latency(us)
00:33:02.577 [2024-10-08T15:48:54.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:02.578 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:02.578 nvme0n1 : 2.01 3813.11 476.64 0.00 0.00 4191.55 860.16 8519.68
00:33:02.578 [2024-10-08T15:48:54.570Z] ===================================================================================================================
00:33:02.578 [2024-10-08T15:48:54.570Z] Total : 3813.11 476.64 0.00 0.00 4191.55 860.16 8519.68
00:33:02.578 {
00:33:02.578 "results": [
00:33:02.578 {
00:33:02.578 "job": "nvme0n1",
00:33:02.578 "core_mask": "0x2",
00:33:02.578 "workload": "randread",
00:33:02.578 "status": "finished",
00:33:02.578 "queue_depth": 16,
00:33:02.578 "io_size": 131072,
00:33:02.578 "runtime": 2.007287,
00:33:02.578 "iops": 3813.106944846452,
00:33:02.578 "mibps": 476.6383681058065,
00:33:02.578 "io_failed": 0,
00:33:02.578 "io_timeout": 0,
00:33:02.578 "avg_latency_us": 4191.551927532445,
00:33:02.578 "min_latency_us": 860.16,
00:33:02.578 "max_latency_us": 8519.68
00:33:02.578 }
00:33:02.578 ],
00:33:02.578 "core_count": 1
00:33:02.578 }
00:33:02.838 17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:02.838 | select(.opcode=="crc32c")
00:33:02.838 | "\(.module_name) \(.executed)"'
17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 545152
17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 545152 ']'
17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 545152
17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 545152
17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
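The pass/fail criterion for a clean run is the accel framework's own bookkeeping rather than the I/O outcome: get_accel_stats pulls per-opcode counters out of the bdevperf process, and the jq filter above reduces them to the module name and execution count for crc32c. With DSA scanning off (scan_dsa=false), the expected result is the software module with a non-zero count:

    rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # expected here: software <count>, matching exp_module=software
    # and satisfying (( acc_executed > 0 ))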
00:33:02.838 17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 545152' 00:33:02.838 killing process with pid 545152 00:33:02.838 17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 545152 00:33:02.838 Received shutdown signal, test time was about 2.000000 seconds 00:33:02.838 00:33:02.838 Latency(us) 00:33:02.838 [2024-10-08T15:48:54.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.838 [2024-10-08T15:48:54.830Z] =================================================================================================================== 00:33:02.838 [2024-10-08T15:48:54.830Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:02.838 17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 545152 00:33:03.100 17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:03.100 17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:03.100 17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:03.100 17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:03.100 17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:03.100 17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:03.100 17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:03.100 17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=546041 00:33:03.100 17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 546041 /var/tmp/bperf.sock 00:33:03.100 17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 546041 ']' 00:33:03.100 17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:03.100 17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:03.100 17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:03.100 17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:03.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:03.100 17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:03.100 17:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:03.100 [2024-10-08 17:48:54.993292] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
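killprocess, now traced for the second time, is deliberately careful before signalling: it checks that the pid argument is non-empty, confirms the process is alive with kill -0, inspects the process comm so it never signals a sudo wrapper, and reaps the child with wait so bdevperf's exit status propagates into the test result. A condensed sketch of the checks visible in the trace (not the verbatim helper from autotest_common.sh):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                  # bail out if already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1  # never signal the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap; surfaces the app's exit code
    }

Here comm reports reactor_1, the SPDK reactor thread name, so the plain kill path is taken.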
00:33:03.100 [2024-10-08 17:48:54.993350] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid546041 ] 00:33:03.100 [2024-10-08 17:48:55.067953] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.360 [2024-10-08 17:48:55.121313] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:03.930 17:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:03.930 17:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:33:03.930 17:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:03.930 17:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:03.930 17:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:04.191 17:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:04.191 17:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:04.451 nvme0n1 00:33:04.451 17:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:04.451 17:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:04.712 Running I/O for 2 seconds... 
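For reading the summary that follows: bdevperf derives the MiB/s column directly from IOPS and the configured io_size, MiB/s = IOPS * io_size / 2^20, so the two columns always agree. For this 4 KiB randwrite pass the reported pair checks out:

    # 30546.23 IOPS * 4096 B is about 125.1 MB/s, i.e. 119.32 MiB/s
    echo 'scale=2; 30546.232405 * 4096 / 1048576' | bc    # prints 119.32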
00:33:06.595 30522.00 IOPS, 119.23 MiB/s [2024-10-08T15:48:58.587Z] 30534.00 IOPS, 119.27 MiB/s
00:33:06.595 Latency(us)
00:33:06.595 [2024-10-08T15:48:58.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:06.595 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:06.595 nvme0n1 : 2.00 30546.23 119.32 0.00 0.00 4184.91 2061.65 7372.80
00:33:06.595 [2024-10-08T15:48:58.587Z] ===================================================================================================================
00:33:06.595 [2024-10-08T15:48:58.587Z] Total : 30546.23 119.32 0.00 0.00 4184.91 2061.65 7372.80
00:33:06.595 {
00:33:06.595 "results": [
00:33:06.595 {
00:33:06.595 "job": "nvme0n1",
00:33:06.595 "core_mask": "0x2",
00:33:06.595 "workload": "randwrite",
00:33:06.595 "status": "finished",
00:33:06.595 "queue_depth": 128,
00:33:06.595 "io_size": 4096,
00:33:06.595 "runtime": 2.004568,
00:33:06.595 "iops": 30546.232405186554,
00:33:06.595 "mibps": 119.32122033275998,
00:33:06.595 "io_failed": 0,
00:33:06.595 "io_timeout": 0,
00:33:06.595 "avg_latency_us": 4184.909774409895,
00:33:06.595 "min_latency_us": 2061.653333333333,
00:33:06.595 "max_latency_us": 7372.8
00:33:06.595 }
00:33:06.595 ],
00:33:06.595 "core_count": 1
00:33:06.595 }
00:33:06.595 17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:06.595 | select(.opcode=="crc32c")
00:33:06.595 | "\(.module_name) \(.executed)"'
17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:33:06.856 17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 546041
17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 546041 ']'
17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 546041
17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 546041
17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 =
sudo ']' 00:33:06.856 17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 546041' 00:33:06.856 killing process with pid 546041 00:33:06.856 17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 546041 00:33:06.856 Received shutdown signal, test time was about 2.000000 seconds 00:33:06.856 00:33:06.856 Latency(us) 00:33:06.856 [2024-10-08T15:48:58.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:06.856 [2024-10-08T15:48:58.848Z] =================================================================================================================== 00:33:06.856 [2024-10-08T15:48:58.848Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:06.856 17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 546041 00:33:07.122 17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:07.122 17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:07.122 17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:07.122 17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:07.122 17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:07.122 17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:07.122 17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:07.122 17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=546832 00:33:07.122 17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 546832 /var/tmp/bperf.sock 00:33:07.122 17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 546832 ']' 00:33:07.122 17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:07.122 17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:07.122 17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:07.122 17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:07.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:07.123 17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:07.123 17:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:07.123 [2024-10-08 17:48:58.912564] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:33:07.123 [2024-10-08 17:48:58.912617] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid546832 ] 00:33:07.123 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:07.123 Zero copy mechanism will not be used. 00:33:07.123 [2024-10-08 17:48:58.991363] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.123 [2024-10-08 17:48:59.042930] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:08.065 17:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:08.065 17:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:33:08.065 17:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:08.065 17:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:08.065 17:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:08.065 17:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:08.065 17:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:08.326 nvme0n1 00:33:08.326 17:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:08.326 17:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:08.326 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:08.326 Zero copy mechanism will not be used. 00:33:08.326 Running I/O for 2 seconds... 
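This launches the fourth and final clean-path combination. run_digest sweeps both access patterns at both block sizes against the same --ddgst-attached controller, per digest.sh@128-131 in the traces above:

    run_bperf randread  4096   128 false   # 4 KiB I/O, queue depth 128
    run_bperf randread  131072 16  false   # 128 KiB I/O, queue depth 16
    run_bperf randwrite 4096   128 false
    run_bperf randwrite 131072 16  false

The trailing false is scan_dsa, which is why every run expects the software crc32c module. The 128 KiB runs additionally log that zero copy is off: 131072 bytes exceeds the 65536-byte zero-copy threshold, which affects buffer handling in bdevperf but not the digest logic under test.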
00:33:10.651 4259.00 IOPS, 532.38 MiB/s [2024-10-08T15:49:02.643Z] 5905.00 IOPS, 738.12 MiB/s
00:33:10.651 Latency(us)
00:33:10.651 [2024-10-08T15:49:02.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:10.651 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:33:10.651 nvme0n1 : 2.01 5899.71 737.46 0.00 0.00 2706.93 1153.71 14636.37
00:33:10.651 [2024-10-08T15:49:02.643Z] ===================================================================================================================
00:33:10.651 [2024-10-08T15:49:02.643Z] Total : 5899.71 737.46 0.00 0.00 2706.93 1153.71 14636.37
00:33:10.651 {
00:33:10.651 "results": [
00:33:10.651 {
00:33:10.651 "job": "nvme0n1",
00:33:10.651 "core_mask": "0x2",
00:33:10.651 "workload": "randwrite",
00:33:10.651 "status": "finished",
00:33:10.651 "queue_depth": 16,
00:33:10.651 "io_size": 131072,
00:33:10.651 "runtime": 2.005184,
00:33:10.651 "iops": 5899.70795697552,
00:33:10.651 "mibps": 737.46349462194,
00:33:10.651 "io_failed": 0,
00:33:10.651 "io_timeout": 0,
00:33:10.651 "avg_latency_us": 2706.9317373908148,
00:33:10.651 "min_latency_us": 1153.7066666666667,
00:33:10.651 "max_latency_us": 14636.373333333333
00:33:10.651 }
00:33:10.651 ],
00:33:10.651 "core_count": 1
00:33:10.651 }
00:33:10.651 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:10.651 | select(.opcode=="crc32c")
00:33:10.651 | "\(.module_name) \(.executed)"'
17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 546832
17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 546832 ']'
17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 546832
17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 546832
17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '['
reactor_1 = sudo ']' 00:33:10.651 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 546832' 00:33:10.651 killing process with pid 546832 00:33:10.651 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 546832 00:33:10.651 Received shutdown signal, test time was about 2.000000 seconds 00:33:10.651 00:33:10.651 Latency(us) 00:33:10.651 [2024-10-08T15:49:02.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:10.651 [2024-10-08T15:49:02.643Z] =================================================================================================================== 00:33:10.651 [2024-10-08T15:49:02.643Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:10.651 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 546832 00:33:10.912 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 544276 00:33:10.912 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 544276 ']' 00:33:10.912 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 544276 00:33:10.912 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:33:10.912 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:10.912 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 544276 00:33:10.912 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:10.912 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:10.912 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 544276' 00:33:10.912 killing process with pid 544276 00:33:10.912 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 544276 00:33:10.912 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 544276 00:33:10.912 00:33:10.912 real 0m16.831s 00:33:10.912 user 0m33.352s 00:33:10.912 sys 0m3.663s 00:33:10.912 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:10.912 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:10.912 ************************************ 00:33:10.912 END TEST nvmf_digest_clean 00:33:10.912 ************************************ 00:33:11.172 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:11.172 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:11.172 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:11.172 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:11.172 ************************************ 00:33:11.172 START TEST nvmf_digest_error 00:33:11.172 ************************************ 00:33:11.172 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:33:11.172 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:11.172 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:11.172 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:11.172 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:11.172 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=547548 00:33:11.172 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 547548 00:33:11.172 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:11.172 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 547548 ']' 00:33:11.172 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:11.172 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:11.172 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:11.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:11.172 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:11.172 17:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:11.172 [2024-10-08 17:49:03.019528] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:33:11.172 [2024-10-08 17:49:03.019579] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:11.172 [2024-10-08 17:49:03.105244] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.172 [2024-10-08 17:49:03.163931] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:11.172 [2024-10-08 17:49:03.163965] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:11.172 [2024-10-08 17:49:03.163971] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:11.172 [2024-10-08 17:49:03.163981] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:11.172 [2024-10-08 17:49:03.163985] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:11.172 [2024-10-08 17:49:03.164467] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.114 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:12.114 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:33:12.114 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:12.114 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:12.114 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:12.114 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:12.114 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:12.114 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.114 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:12.114 [2024-10-08 17:49:03.850347] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:12.114 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.114 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:12.114 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:12.114 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.114 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:12.114 null0 00:33:12.114 [2024-10-08 17:49:03.928362] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:12.114 [2024-10-08 17:49:03.952556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:12.114 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.114 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:12.115 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:12.115 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:12.115 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:12.115 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:12.115 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=547889 00:33:12.115 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 547889 /var/tmp/bperf.sock 00:33:12.115 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 547889 ']' 00:33:12.115 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:12.115 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:12.115 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:12.115 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:12.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:12.115 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:12.115 17:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:12.115 [2024-10-08 17:49:04.019270] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:33:12.115 [2024-10-08 17:49:04.019324] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid547889 ] 00:33:12.115 [2024-10-08 17:49:04.098283] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.376 [2024-10-08 17:49:04.151824] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.945 17:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:12.945 17:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:33:12.945 17:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:12.945 17:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:13.205 17:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:13.205 17:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.205 17:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:13.205 17:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.205 17:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:13.205 17:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:13.466 nvme0n1 00:33:13.466 17:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:13.466 17:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.466 17:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:13.466 
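Stripped of the xtrace noise, the flow that produces the digest-error storm below is a short RPC sequence. Host-side knobs go to bdevperf's private socket (/var/tmp/bperf.sock, the bperf_rpc helper), while corruption is armed on the target, whose crc32c opcode was assigned to the error module at startup (rpc_cmd, assumed here to use the default /var/tmp/spdk.sock). A hedged sketch with the commands exactly as captured above:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  tgt_rpc()   { "$SPDK/scripts/rpc.py" "$@"; }                        # target socket
  bperf_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; } # bdevperf socket

  # Host: keep per-controller NVMe error statistics and retry failed I/O
  # forever, so injected digest errors are observable but never fatal.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Target: injection disabled while the controller attaches cleanly.
  tgt_rpc accel_error_inject_error -o crc32c -t disable

  # Host: attach over TCP with data digest enabled (--ddgst); this is what
  # creates the nvme0n1 namespace that bdevperf runs against.
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Target: arm crc32c corruption (arguments as in the log), then drive the
  # 2-second randread workload from bdevperf.
  tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 256
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Each corrupted crc32c on the target yields a bad data digest on the wire; the host's receive path catches the mismatch (nvme_tcp.c:1470) and completes the READ with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which bdev_nvme then retries. That is the pattern repeated in the completions below.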
17:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.466 17:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:13.466 17:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:13.466 Running I/O for 2 seconds... 00:33:13.466 [2024-10-08 17:49:05.396194] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:13.466 [2024-10-08 17:49:05.396225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.466 [2024-10-08 17:49:05.396234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.466 [2024-10-08 17:49:05.406587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:13.466 [2024-10-08 17:49:05.406606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.466 [2024-10-08 17:49:05.406614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.466 [2024-10-08 17:49:05.415427] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:13.466 [2024-10-08 17:49:05.415445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.466 [2024-10-08 17:49:05.415452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.466 [2024-10-08 17:49:05.424317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:13.466 [2024-10-08 17:49:05.424335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.466 [2024-10-08 17:49:05.424342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.466 [2024-10-08 17:49:05.432691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:13.466 [2024-10-08 17:49:05.432710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.466 [2024-10-08 17:49:05.432716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.466 [2024-10-08 17:49:05.442103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:13.466 [2024-10-08 17:49:05.442122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.466 [2024-10-08 17:49:05.442129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.466 [2024-10-08 17:49:05.451128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:13.466 [2024-10-08 17:49:05.451146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.466 [2024-10-08 17:49:05.451154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.727 [2024-10-08 17:49:05.461871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:13.727 [2024-10-08 17:49:05.461888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.727 [2024-10-08 17:49:05.461895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.727 [2024-10-08 17:49:05.471295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:13.727 [2024-10-08 17:49:05.471318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.727 [2024-10-08 17:49:05.471325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.727 [2024-10-08 17:49:05.480472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:13.727 [2024-10-08 17:49:05.480490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.727 [2024-10-08 17:49:05.480496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.727 [2024-10-08 17:49:05.489423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:13.727 [2024-10-08 17:49:05.489441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.727 [2024-10-08 17:49:05.489447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.727 [2024-10-08 17:49:05.499041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:13.727 [2024-10-08 17:49:05.499059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.727 [2024-10-08 17:49:05.499066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.727 [2024-10-08 17:49:05.507641] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:13.727 [2024-10-08 17:49:05.507658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.727 [2024-10-08 17:49:05.507665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.727
[... remaining completions of the 2-second run elided: each READ on qid:1 hits a data digest error on tqpair=(0x17639c0) (nvme_tcp.c:1470) and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), the entries differing only in timestamp, cid, and lba ...]
28046.00 IOPS, 109.55 MiB/s [2024-10-08T15:49:06.504Z]
[2024-10-08 17:49:06.435229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.513 [2024-10-08 17:49:06.435246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.513 [2024-10-08 17:49:06.435252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001
p:0 m:0 dnr:0 00:33:14.513 [2024-10-08 17:49:06.444179] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.513 [2024-10-08 17:49:06.444196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.513 [2024-10-08 17:49:06.444203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.513 [2024-10-08 17:49:06.452270] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.513 [2024-10-08 17:49:06.452287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.513 [2024-10-08 17:49:06.452293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.513 [2024-10-08 17:49:06.461138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.513 [2024-10-08 17:49:06.461154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.513 [2024-10-08 17:49:06.461161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.513 [2024-10-08 17:49:06.470011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.513 [2024-10-08 17:49:06.470028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.513 [2024-10-08 17:49:06.470034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.513 [2024-10-08 17:49:06.478705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.513 [2024-10-08 17:49:06.478722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.513 [2024-10-08 17:49:06.478728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.513 [2024-10-08 17:49:06.487526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.513 [2024-10-08 17:49:06.487543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.513 [2024-10-08 17:49:06.487549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.513 [2024-10-08 17:49:06.496560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.513 [2024-10-08 17:49:06.496578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.513 [2024-10-08 17:49:06.496584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.505828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.505846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.505853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.515129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.515146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.515152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.524278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.524295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.524301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.532812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.532829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.532835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.540897] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.540915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.540921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.549914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.549931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.549940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.559842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.559859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.559865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.568484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.568501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.568508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.577480] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.577497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.577503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.586167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.586184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.586191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.595345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.595362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.595369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.604407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.604424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.604431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.613086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.613102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.613108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.622054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.622071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:14.773 [2024-10-08 17:49:06.622077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.630882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.630901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.630908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.639929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.639946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.639952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.648498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.648515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.648521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.660908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.660925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.660931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.668791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.668808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.668815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.678366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.678383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.678389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.688093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.688110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 
lba:732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.688117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.697579] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.697596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.697602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.705971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.705991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.705997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.714358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.714375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.714382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.723774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.723791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.773 [2024-10-08 17:49:06.723797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.773 [2024-10-08 17:49:06.732114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.773 [2024-10-08 17:49:06.732131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.774 [2024-10-08 17:49:06.732138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.774 [2024-10-08 17:49:06.740775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.774 [2024-10-08 17:49:06.740792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.774 [2024-10-08 17:49:06.740798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.774 [2024-10-08 17:49:06.750043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.774 [2024-10-08 17:49:06.750060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.774 [2024-10-08 17:49:06.750066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.774 [2024-10-08 17:49:06.759228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:14.774 [2024-10-08 17:49:06.759245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.774 [2024-10-08 17:49:06.759251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.035 [2024-10-08 17:49:06.768432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.035 [2024-10-08 17:49:06.768450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.035 [2024-10-08 17:49:06.768457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.035 [2024-10-08 17:49:06.777184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.035 [2024-10-08 17:49:06.777202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.035 [2024-10-08 17:49:06.777209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.035 [2024-10-08 17:49:06.786061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.035 [2024-10-08 17:49:06.786079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.035 [2024-10-08 17:49:06.786089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.035 [2024-10-08 17:49:06.795606] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.035 [2024-10-08 17:49:06.795623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.035 [2024-10-08 17:49:06.795629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.035 [2024-10-08 17:49:06.803174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.035 [2024-10-08 17:49:06.803191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.035 [2024-10-08 17:49:06.803197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.035 [2024-10-08 17:49:06.812529] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.035 
[2024-10-08 17:49:06.812546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.035 [2024-10-08 17:49:06.812553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.035 [2024-10-08 17:49:06.821875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.035 [2024-10-08 17:49:06.821893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.035 [2024-10-08 17:49:06.821900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.035 [2024-10-08 17:49:06.830659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.035 [2024-10-08 17:49:06.830675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.035 [2024-10-08 17:49:06.830682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.035 [2024-10-08 17:49:06.839517] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.035 [2024-10-08 17:49:06.839533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.035 [2024-10-08 17:49:06.839539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.035 [2024-10-08 17:49:06.848083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.035 [2024-10-08 17:49:06.848100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.035 [2024-10-08 17:49:06.848106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.036 [2024-10-08 17:49:06.857122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.036 [2024-10-08 17:49:06.857139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.036 [2024-10-08 17:49:06.857146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.036 [2024-10-08 17:49:06.866435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.036 [2024-10-08 17:49:06.866455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.036 [2024-10-08 17:49:06.866461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.036 [2024-10-08 17:49:06.874798] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x17639c0) 00:33:15.036 [2024-10-08 17:49:06.874816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.036 [2024-10-08 17:49:06.874822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.036 [2024-10-08 17:49:06.884786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.036 [2024-10-08 17:49:06.884803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.036 [2024-10-08 17:49:06.884809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.036 [2024-10-08 17:49:06.893646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.036 [2024-10-08 17:49:06.893664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.036 [2024-10-08 17:49:06.893670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.036 [2024-10-08 17:49:06.902156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.036 [2024-10-08 17:49:06.902173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.036 [2024-10-08 17:49:06.902180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.036 [2024-10-08 17:49:06.911440] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.036 [2024-10-08 17:49:06.911457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.036 [2024-10-08 17:49:06.911464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.036 [2024-10-08 17:49:06.920156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.036 [2024-10-08 17:49:06.920174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.036 [2024-10-08 17:49:06.920180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.036 [2024-10-08 17:49:06.928510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.036 [2024-10-08 17:49:06.928528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.036 [2024-10-08 17:49:06.928534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.036 [2024-10-08 17:49:06.938097] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.036 [2024-10-08 17:49:06.938114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.036 [2024-10-08 17:49:06.938121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.036 [2024-10-08 17:49:06.947727] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.036 [2024-10-08 17:49:06.947744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.036 [2024-10-08 17:49:06.947750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.036 [2024-10-08 17:49:06.956180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.036 [2024-10-08 17:49:06.956197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.036 [2024-10-08 17:49:06.956204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.036 [2024-10-08 17:49:06.965176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.036 [2024-10-08 17:49:06.965192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.036 [2024-10-08 17:49:06.965199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.036 [2024-10-08 17:49:06.973827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.036 [2024-10-08 17:49:06.973844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.036 [2024-10-08 17:49:06.973850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.036 [2024-10-08 17:49:06.983312] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.036 [2024-10-08 17:49:06.983329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.036 [2024-10-08 17:49:06.983335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.036 [2024-10-08 17:49:06.991468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.036 [2024-10-08 17:49:06.991485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.036 [2024-10-08 17:49:06.991492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:33:15.036 [2024-10-08 17:49:07.000907] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.036 [2024-10-08 17:49:07.000924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.036 [2024-10-08 17:49:07.000931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.036 [2024-10-08 17:49:07.009554] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.036 [2024-10-08 17:49:07.009571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.036 [2024-10-08 17:49:07.009577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.036 [2024-10-08 17:49:07.018269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.036 [2024-10-08 17:49:07.018289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.036 [2024-10-08 17:49:07.018295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.036 [2024-10-08 17:49:07.026754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.036 [2024-10-08 17:49:07.026771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.036 [2024-10-08 17:49:07.026777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.298 [2024-10-08 17:49:07.035736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.298 [2024-10-08 17:49:07.035754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.298 [2024-10-08 17:49:07.035760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.298 [2024-10-08 17:49:07.045317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.298 [2024-10-08 17:49:07.045335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.298 [2024-10-08 17:49:07.045341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.298 [2024-10-08 17:49:07.053794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.298 [2024-10-08 17:49:07.053811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.298 [2024-10-08 17:49:07.053818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.298 [2024-10-08 17:49:07.062663] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.298 [2024-10-08 17:49:07.062680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.298 [2024-10-08 17:49:07.062687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.298 [2024-10-08 17:49:07.071493] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.298 [2024-10-08 17:49:07.071511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.298 [2024-10-08 17:49:07.071517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.298 [2024-10-08 17:49:07.080545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.298 [2024-10-08 17:49:07.080562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.298 [2024-10-08 17:49:07.080568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.298 [2024-10-08 17:49:07.089187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.298 [2024-10-08 17:49:07.089204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.298 [2024-10-08 17:49:07.089210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.298 [2024-10-08 17:49:07.097474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.298 [2024-10-08 17:49:07.097491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.298 [2024-10-08 17:49:07.097497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.298 [2024-10-08 17:49:07.106955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.298 [2024-10-08 17:49:07.106971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.298 [2024-10-08 17:49:07.106983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.298 [2024-10-08 17:49:07.115563] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.298 [2024-10-08 17:49:07.115580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.298 [2024-10-08 17:49:07.115586] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.298 [2024-10-08 17:49:07.124333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.298 [2024-10-08 17:49:07.124350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.298 [2024-10-08 17:49:07.124356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.298 [2024-10-08 17:49:07.132991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.298 [2024-10-08 17:49:07.133007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.298 [2024-10-08 17:49:07.133014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.298 [2024-10-08 17:49:07.141904] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.298 [2024-10-08 17:49:07.141922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.298 [2024-10-08 17:49:07.141928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.298 [2024-10-08 17:49:07.149844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.298 [2024-10-08 17:49:07.149860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.298 [2024-10-08 17:49:07.149867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.298 [2024-10-08 17:49:07.159162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.298 [2024-10-08 17:49:07.159180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.298 [2024-10-08 17:49:07.159186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.298 [2024-10-08 17:49:07.168483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.298 [2024-10-08 17:49:07.168500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.298 [2024-10-08 17:49:07.168509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.298 [2024-10-08 17:49:07.176793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.299 [2024-10-08 17:49:07.176811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:15.299 [2024-10-08 17:49:07.176817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.299 [2024-10-08 17:49:07.186474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.299 [2024-10-08 17:49:07.186491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.299 [2024-10-08 17:49:07.186497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.299 [2024-10-08 17:49:07.194841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.299 [2024-10-08 17:49:07.194858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.299 [2024-10-08 17:49:07.194864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.299 [2024-10-08 17:49:07.203443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.299 [2024-10-08 17:49:07.203460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.299 [2024-10-08 17:49:07.203466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.299 [2024-10-08 17:49:07.212287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.299 [2024-10-08 17:49:07.212304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.299 [2024-10-08 17:49:07.212311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.299 [2024-10-08 17:49:07.220897] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.299 [2024-10-08 17:49:07.220914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.299 [2024-10-08 17:49:07.220921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.299 [2024-10-08 17:49:07.230273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.299 [2024-10-08 17:49:07.230291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.299 [2024-10-08 17:49:07.230297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.299 [2024-10-08 17:49:07.239043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.299 [2024-10-08 17:49:07.239059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 
lba:12803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.299 [2024-10-08 17:49:07.239066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.299 [2024-10-08 17:49:07.247947] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.299 [2024-10-08 17:49:07.247966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.299 [2024-10-08 17:49:07.247972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.299 [2024-10-08 17:49:07.257287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.299 [2024-10-08 17:49:07.257304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.299 [2024-10-08 17:49:07.257310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.299 [2024-10-08 17:49:07.265743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.299 [2024-10-08 17:49:07.265760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.299 [2024-10-08 17:49:07.265766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.299 [2024-10-08 17:49:07.274299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.299 [2024-10-08 17:49:07.274316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.299 [2024-10-08 17:49:07.274322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.299 [2024-10-08 17:49:07.283624] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.299 [2024-10-08 17:49:07.283641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.299 [2024-10-08 17:49:07.283648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.560 [2024-10-08 17:49:07.293807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.560 [2024-10-08 17:49:07.293825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.560 [2024-10-08 17:49:07.293831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.560 [2024-10-08 17:49:07.302795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.560 [2024-10-08 17:49:07.302812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.560 [2024-10-08 17:49:07.302819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.560 [2024-10-08 17:49:07.311586] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.560 [2024-10-08 17:49:07.311603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.560 [2024-10-08 17:49:07.311610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.560 [2024-10-08 17:49:07.319983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.560 [2024-10-08 17:49:07.320001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.561 [2024-10-08 17:49:07.320007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.561 [2024-10-08 17:49:07.328951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.561 [2024-10-08 17:49:07.328969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.561 [2024-10-08 17:49:07.328980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.561 [2024-10-08 17:49:07.337831] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.561 [2024-10-08 17:49:07.337847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.561 [2024-10-08 17:49:07.337854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.561 [2024-10-08 17:49:07.347587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.561 [2024-10-08 17:49:07.347605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.561 [2024-10-08 17:49:07.347611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.561 [2024-10-08 17:49:07.354597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 00:33:15.561 [2024-10-08 17:49:07.354614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.561 [2024-10-08 17:49:07.354620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.561 [2024-10-08 17:49:07.364288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0) 
00:33:15.561 [2024-10-08 17:49:07.364305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:15.561 [2024-10-08 17:49:07.364312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:15.561 [2024-10-08 17:49:07.373423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0)
00:33:15.561 [2024-10-08 17:49:07.373441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:15.561 [2024-10-08 17:49:07.373448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:15.561 28287.50 IOPS, 110.50 MiB/s [2024-10-08T15:49:07.553Z]
00:33:15.561 [2024-10-08 17:49:07.384864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17639c0)
00:33:15.561 [2024-10-08 17:49:07.384881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:15.561 [2024-10-08 17:49:07.384887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:15.561
00:33:15.561                                   Latency(us)
00:33:15.561 [2024-10-08T15:49:07.553Z] Device Information : runtime(s)      IOPS     MiB/s    Fail/s    TO/s   Average       min       max
00:33:15.561 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:15.561 nvme0n1            : 2.04       27743.65    108.37      0.00      0.00   4519.84   2553.17  48059.73
00:33:15.561 [2024-10-08T15:49:07.553Z] ===================================================================================================================
00:33:15.561 [2024-10-08T15:49:07.553Z] Total              :            27743.65    108.37      0.00      0.00   4519.84   2553.17  48059.73
00:33:15.561 {
00:33:15.561   "results": [
00:33:15.561     {
00:33:15.561       "job": "nvme0n1",
00:33:15.561       "core_mask": "0x2",
00:33:15.561       "workload": "randread",
00:33:15.561       "status": "finished",
00:33:15.561       "queue_depth": 128,
00:33:15.561       "io_size": 4096,
00:33:15.561       "runtime": 2.043819,
00:33:15.561       "iops": 27743.65048959815,
00:33:15.561       "mibps": 108.37363472499277,
00:33:15.561       "io_failed": 0,
00:33:15.561       "io_timeout": 0,
00:33:15.561       "avg_latency_us": 4519.842297820809,
00:33:15.561       "min_latency_us": 2553.173333333333,
00:33:15.561       "max_latency_us": 48059.73333333333
00:33:15.561     }
00:33:15.561   ],
00:33:15.561   "core_count": 1
00:33:15.561 }
17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:15.561 | .driver_specific
00:33:15.561 | .nvme_error
00:33:15.561 | .status_code
00:33:15.561 | .command_transient_transport_error'
17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:15.821 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 222 > 0 ))
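
The (( 222 > 0 )) check above is the test's assertion that injected digest errors were actually observed: get_transient_errcount asks the bdevperf RPC server for per-bdev iostats and pulls out the count of COMMAND TRANSIENT TRANSPORT ERROR completions with the jq filter traced above. A minimal standalone sketch of the same query, assuming a bdevperf instance listening on /var/tmp/bperf.sock and a bdev named nvme0n1:

    # Count transient transport errors recorded for nvme0n1. Requires the
    # controller to have been set up with bdev_nvme_set_options
    # --nvme-error-stat, as done later in this run.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 )) || echo "no injected digest errors were observed" >&2

Here 222 such completions were counted over the roughly two-second run, so the check passes and the harness tears this bdevperf instance down before the next pass.
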
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 547889 00:33:15.821 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 547889 ']' 00:33:15.821 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 547889 00:33:15.821 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:33:15.822 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:15.822 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 547889 00:33:15.822 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:15.822 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:15.822 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 547889' 00:33:15.822 killing process with pid 547889 00:33:15.822 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 547889 00:33:15.822 Received shutdown signal, test time was about 2.000000 seconds 00:33:15.822 00:33:15.822 Latency(us) 00:33:15.822 [2024-10-08T15:49:07.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:15.822 [2024-10-08T15:49:07.814Z] =================================================================================================================== 00:33:15.822 [2024-10-08T15:49:07.814Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:15.822 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 547889 00:33:15.822 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:33:15.822 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:15.822 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:15.822 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:15.822 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:15.822 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=548577 00:33:15.822 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 548577 /var/tmp/bperf.sock 00:33:15.822 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 548577 ']' 00:33:16.083 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:16.083 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:16.083 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:16.083 17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
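
For this error-injection pass the harness relaunches bdevperf with the parameters traced above: core mask 0x2, 128 KiB random reads, queue depth 16, a two-second run, started suspended (-z) on its own RPC socket so it can be configured before any I/O is issued. A minimal sketch of that launch-and-wait pattern follows; the readiness probe via rpc.py spdk_get_version is an assumption for illustration, since the harness's waitforlisten helper does its own polling:

    # Launch bdevperf suspended (-z) with a private RPC socket, then wait
    # until its RPC server answers before configuring it.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this run
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Poll until the socket is serving RPCs (probe method is an assumption).
    for ((i = 0; i < 100; i++)); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done
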
17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
17:49:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:16.083 [2024-10-08 17:49:07.861977] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
00:33:16.083 [2024-10-08 17:49:07.862037] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid548577 ]
00:33:16.083 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:16.083 Zero copy mechanism will not be used.
00:33:16.083 [2024-10-08 17:49:07.938130] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:16.083 [2024-10-08 17:49:07.991626] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:33:17.023 17:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
17:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
17:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
17:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
17:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
17:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
17:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
17:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
17:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
17:49:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:17.283 nvme0n1
17:49:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
17:49:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
17:49:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
17:49:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
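
This is the heart of the digest_error setup: NVMe error statistics and unlimited bdev retries are enabled, the controller is attached with the TCP data digest (--ddgst) turned on, and the accel layer's crc32c operation is then deliberately corrupted so that received data digests fail verification. Condensed into a sketch, using exactly the RPC invocations traced above (only the rpc helper variable is an illustrative assumption):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    # Keep NVMe error counters and retry failed I/O indefinitely, so injected
    # digest errors are counted but never surface as application I/O failures.
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Start from a clean slate: no injection on crc32c operations.
    $rpc accel_error_inject_error -o crc32c -t disable
    # Attach the target with data digest (CRC32C over each data PDU) enabled.
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Now corrupt crc32c results (arguments as in this run), so read-side
    # digest verification starts failing once I/O begins.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32

The attach prints the resulting bdev name, nvme0n1, which is the device the perform_tests RPC below drives.
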
17:49:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
17:49:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:17.545 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:17.545 Zero copy mechanism will not be used.
00:33:17.545 Running I/O for 2 seconds...
00:33:17.545 [2024-10-08 17:49:09.334238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0)
00:33:17.545 [2024-10-08 17:49:09.334276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:17.545 [2024-10-08 17:49:09.334285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:17.545 [2024-10-08 17:49:09.346481] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0)
00:33:17.545 [2024-10-08 17:49:09.346503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:17.545 [2024-10-08 17:49:09.346510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:17.545 [2024-10-08 17:49:09.358925] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0)
00:33:17.545 [2024-10-08 17:49:09.358944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:17.545 [2024-10-08 17:49:09.358951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:17.545 [2024-10-08 17:49:09.371611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0)
00:33:17.545 [2024-10-08 17:49:09.371629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:17.545 [2024-10-08 17:49:09.371636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:17.545 [2024-10-08 17:49:09.382822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0)
00:33:17.545 [2024-10-08 17:49:09.382839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:17.545 [2024-10-08 17:49:09.382846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:17.545 [2024-10-08 17:49:09.394995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0)
00:33:17.545 [2024-10-08 17:49:09.395012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:17.545 [2024-10-08 17:49:09.395018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:17.545 [2024-10-08 17:49:09.406722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0)
[2024-10-08 17:49:09.406739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.545 [2024-10-08 17:49:09.406745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.545 [2024-10-08 17:49:09.418284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.545 [2024-10-08 17:49:09.418301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.545 [2024-10-08 17:49:09.418307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.545 [2024-10-08 17:49:09.432182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.545 [2024-10-08 17:49:09.432199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.545 [2024-10-08 17:49:09.432205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.545 [2024-10-08 17:49:09.438995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.545 [2024-10-08 17:49:09.439012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.545 [2024-10-08 17:49:09.439019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.545 [2024-10-08 17:49:09.444862] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.545 [2024-10-08 17:49:09.444878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.545 [2024-10-08 17:49:09.444885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.545 [2024-10-08 17:49:09.454915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.545 [2024-10-08 17:49:09.454932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.545 [2024-10-08 17:49:09.454939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.545 [2024-10-08 17:49:09.463174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.545 [2024-10-08 17:49:09.463191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.545 [2024-10-08 17:49:09.463198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.545 [2024-10-08 17:49:09.468000] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xbed8b0) 00:33:17.545 [2024-10-08 17:49:09.468018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.545 [2024-10-08 17:49:09.468025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.545 [2024-10-08 17:49:09.472932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.545 [2024-10-08 17:49:09.472949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.545 [2024-10-08 17:49:09.472956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.545 [2024-10-08 17:49:09.478637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.545 [2024-10-08 17:49:09.478654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.545 [2024-10-08 17:49:09.478661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.545 [2024-10-08 17:49:09.483052] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.545 [2024-10-08 17:49:09.483069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.545 [2024-10-08 17:49:09.483075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.545 [2024-10-08 17:49:09.490111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.545 [2024-10-08 17:49:09.490128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.545 [2024-10-08 17:49:09.490137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.545 [2024-10-08 17:49:09.497047] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.545 [2024-10-08 17:49:09.497064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.545 [2024-10-08 17:49:09.497071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.545 [2024-10-08 17:49:09.504535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.545 [2024-10-08 17:49:09.504552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.545 [2024-10-08 17:49:09.504559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.545 [2024-10-08 17:49:09.511355] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.545 [2024-10-08 17:49:09.511373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.545 [2024-10-08 17:49:09.511380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.545 [2024-10-08 17:49:09.523430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.545 [2024-10-08 17:49:09.523447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.545 [2024-10-08 17:49:09.523454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.545 [2024-10-08 17:49:09.533547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.545 [2024-10-08 17:49:09.533564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.545 [2024-10-08 17:49:09.533571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.807 [2024-10-08 17:49:09.541892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.807 [2024-10-08 17:49:09.541910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.807 [2024-10-08 17:49:09.541916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.807 [2024-10-08 17:49:09.549783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.807 [2024-10-08 17:49:09.549800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.807 [2024-10-08 17:49:09.549806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.807 [2024-10-08 17:49:09.560864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.807 [2024-10-08 17:49:09.560881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.807 [2024-10-08 17:49:09.560887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.807 [2024-10-08 17:49:09.565984] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.807 [2024-10-08 17:49:09.566004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.807 [2024-10-08 17:49:09.566010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:33:17.807 [2024-10-08 17:49:09.574188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.807 [2024-10-08 17:49:09.574205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.807 [2024-10-08 17:49:09.574211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.807 [2024-10-08 17:49:09.582680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.807 [2024-10-08 17:49:09.582697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.807 [2024-10-08 17:49:09.582704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.807 [2024-10-08 17:49:09.587063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.807 [2024-10-08 17:49:09.587080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.807 [2024-10-08 17:49:09.587087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.807 [2024-10-08 17:49:09.594072] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.807 [2024-10-08 17:49:09.594089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.807 [2024-10-08 17:49:09.594095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.807 [2024-10-08 17:49:09.598852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.807 [2024-10-08 17:49:09.598869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.807 [2024-10-08 17:49:09.598876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.807 [2024-10-08 17:49:09.605742] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.807 [2024-10-08 17:49:09.605758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.807 [2024-10-08 17:49:09.605764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.613979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.613996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.614003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.619984] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.620001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.620008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.624306] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.624323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.624329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.633535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.633553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.633559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.640166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.640183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.640190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.650502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.650519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.650525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.660072] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.660089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.660096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.664671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.664688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.664694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.669121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.669139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.669145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.676426] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.676443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.676449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.683857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.683875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.683884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.691137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.691155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.691161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.697433] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.697451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.697457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.705147] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.705164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.705170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.715687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.715705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.715711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.724649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.724666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.724673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.731940] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.731957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.731963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.740013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.740030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.740037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.746036] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.746054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.746060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.751581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.751602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.751608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.755881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.755898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.755905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.762001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.762018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 
[2024-10-08 17:49:09.762024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.770506] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.770524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.770530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.775767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.775784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.775790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.780130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.780147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.780153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.784844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.784862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.784868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.789145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.789162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.789169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.808 [2024-10-08 17:49:09.794954] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:17.808 [2024-10-08 17:49:09.794971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.808 [2024-10-08 17:49:09.794982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.070 [2024-10-08 17:49:09.802148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.070 [2024-10-08 17:49:09.802165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:18.070 [2024-10-08 17:49:09.802172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.070 [2024-10-08 17:49:09.808819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.070 [2024-10-08 17:49:09.808836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.070 [2024-10-08 17:49:09.808843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.070 [2024-10-08 17:49:09.813070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.070 [2024-10-08 17:49:09.813087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.070 [2024-10-08 17:49:09.813093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.070 [2024-10-08 17:49:09.821031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.070 [2024-10-08 17:49:09.821048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.070 [2024-10-08 17:49:09.821055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.070 [2024-10-08 17:49:09.832762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.070 [2024-10-08 17:49:09.832780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.070 [2024-10-08 17:49:09.832787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.070 [2024-10-08 17:49:09.841617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.070 [2024-10-08 17:49:09.841634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.070 [2024-10-08 17:49:09.841641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.070 [2024-10-08 17:49:09.849635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.070 [2024-10-08 17:49:09.849653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.070 [2024-10-08 17:49:09.849660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.070 [2024-10-08 17:49:09.857993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.070 [2024-10-08 17:49:09.858010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.070 [2024-10-08 17:49:09.858017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.070 [2024-10-08 17:49:09.869310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.070 [2024-10-08 17:49:09.869328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.070 [2024-10-08 17:49:09.869337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.070 [2024-10-08 17:49:09.878393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.070 [2024-10-08 17:49:09.878410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.071 [2024-10-08 17:49:09.878416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.071 [2024-10-08 17:49:09.885846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.071 [2024-10-08 17:49:09.885863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.071 [2024-10-08 17:49:09.885870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.071 [2024-10-08 17:49:09.896791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.071 [2024-10-08 17:49:09.896808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.071 [2024-10-08 17:49:09.896814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.071 [2024-10-08 17:49:09.908698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.071 [2024-10-08 17:49:09.908715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.071 [2024-10-08 17:49:09.908722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.071 [2024-10-08 17:49:09.919270] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.071 [2024-10-08 17:49:09.919288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.071 [2024-10-08 17:49:09.919294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.071 [2024-10-08 17:49:09.923583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.071 [2024-10-08 17:49:09.923600] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.071 [2024-10-08 17:49:09.923607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.071 [2024-10-08 17:49:09.927773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.071 [2024-10-08 17:49:09.927791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.071 [2024-10-08 17:49:09.927797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.071 [2024-10-08 17:49:09.936954] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.071 [2024-10-08 17:49:09.936971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.071 [2024-10-08 17:49:09.936983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.071 [2024-10-08 17:49:09.945582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.071 [2024-10-08 17:49:09.945600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.071 [2024-10-08 17:49:09.945606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.071 [2024-10-08 17:49:09.955254] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.071 [2024-10-08 17:49:09.955271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.071 [2024-10-08 17:49:09.955278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.071 [2024-10-08 17:49:09.967192] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.071 [2024-10-08 17:49:09.967210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.071 [2024-10-08 17:49:09.967216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.071 [2024-10-08 17:49:09.979282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.071 [2024-10-08 17:49:09.979299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.071 [2024-10-08 17:49:09.979305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.071 [2024-10-08 17:49:09.989515] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.071 [2024-10-08 17:49:09.989533] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.071 [2024-10-08 17:49:09.989540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.071 [2024-10-08 17:49:10.001112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.071 [2024-10-08 17:49:10.001130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.071 [2024-10-08 17:49:10.001136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.071 [2024-10-08 17:49:10.013310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.071 [2024-10-08 17:49:10.013329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.071 [2024-10-08 17:49:10.013335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.071 [2024-10-08 17:49:10.025221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.071 [2024-10-08 17:49:10.025239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.071 [2024-10-08 17:49:10.025245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.071 [2024-10-08 17:49:10.037225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.071 [2024-10-08 17:49:10.037244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.071 [2024-10-08 17:49:10.037257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.071 [2024-10-08 17:49:10.047182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.071 [2024-10-08 17:49:10.047199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.071 [2024-10-08 17:49:10.047206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.071 [2024-10-08 17:49:10.056906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.071 [2024-10-08 17:49:10.056924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.071 [2024-10-08 17:49:10.056930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.332 [2024-10-08 17:49:10.066477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 
00:33:18.332 [2024-10-08 17:49:10.066495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.332 [2024-10-08 17:49:10.066502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.332 [2024-10-08 17:49:10.077687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.332 [2024-10-08 17:49:10.077705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.332 [2024-10-08 17:49:10.077711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.332 [2024-10-08 17:49:10.089183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.332 [2024-10-08 17:49:10.089201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.332 [2024-10-08 17:49:10.089208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.332 [2024-10-08 17:49:10.100901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.332 [2024-10-08 17:49:10.100919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.332 [2024-10-08 17:49:10.100925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.332 [2024-10-08 17:49:10.113059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.332 [2024-10-08 17:49:10.113076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.332 [2024-10-08 17:49:10.113083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.332 [2024-10-08 17:49:10.125195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.332 [2024-10-08 17:49:10.125212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.333 [2024-10-08 17:49:10.125219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.333 [2024-10-08 17:49:10.137132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.333 [2024-10-08 17:49:10.137153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.333 [2024-10-08 17:49:10.137159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.333 [2024-10-08 17:49:10.149405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xbed8b0) 00:33:18.333 [2024-10-08 17:49:10.149423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.333 [2024-10-08 17:49:10.149430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.333 [2024-10-08 17:49:10.161418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.333 [2024-10-08 17:49:10.161435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.333 [2024-10-08 17:49:10.161442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.333 [2024-10-08 17:49:10.173744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.333 [2024-10-08 17:49:10.173761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.333 [2024-10-08 17:49:10.173768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.333 [2024-10-08 17:49:10.186400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.333 [2024-10-08 17:49:10.186417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.333 [2024-10-08 17:49:10.186423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.333 [2024-10-08 17:49:10.199124] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.333 [2024-10-08 17:49:10.199141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.333 [2024-10-08 17:49:10.199147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.333 [2024-10-08 17:49:10.209773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.333 [2024-10-08 17:49:10.209790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.333 [2024-10-08 17:49:10.209796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.333 [2024-10-08 17:49:10.220287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.333 [2024-10-08 17:49:10.220305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.333 [2024-10-08 17:49:10.220312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.333 [2024-10-08 17:49:10.232189] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.333 [2024-10-08 17:49:10.232206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.333 [2024-10-08 17:49:10.232213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.333 [2024-10-08 17:49:10.243927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.333 [2024-10-08 17:49:10.243945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.333 [2024-10-08 17:49:10.243951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.333 [2024-10-08 17:49:10.256216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.333 [2024-10-08 17:49:10.256235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.333 [2024-10-08 17:49:10.256241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.333 [2024-10-08 17:49:10.268550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.333 [2024-10-08 17:49:10.268569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.333 [2024-10-08 17:49:10.268575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.333 [2024-10-08 17:49:10.280366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.333 [2024-10-08 17:49:10.280384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.333 [2024-10-08 17:49:10.280390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.333 [2024-10-08 17:49:10.292594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.333 [2024-10-08 17:49:10.292612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.333 [2024-10-08 17:49:10.292619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.333 [2024-10-08 17:49:10.304872] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.333 [2024-10-08 17:49:10.304889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.333 [2024-10-08 17:49:10.304895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
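Each three-line group in this stretch is one injected failure: nvme_tcp.c reports a CRC32C data digest mismatch on a received READ payload, and the command is then completed with generic status 0x22, COMMAND TRANSIENT TRANSPORT ERROR, with dnr:0 (do-not-retry clear), so the host-side driver remains free to retry. A rough way to tally the failures from a saved copy of this console output is sketched below; the build.log file name is an assumption, not an artifact the test produces.

    # Count digest failures, then break the transient transport errors down by cid.
    grep -c 'data digest error on tqpair' build.log
    grep -o 'TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:[0-9]*' build.log \
        | awk -F'cid:' '{print $2}' | sort -n | uniq -c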
00:33:18.333 3441.00 IOPS, 430.12 MiB/s [2024-10-08T15:49:10.325Z] [2024-10-08 17:49:10.317886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.333 [2024-10-08 17:49:10.317905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.333 [2024-10-08 17:49:10.317912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.330126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.330144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.330150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.339004] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.339022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.339032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.348252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.348270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.348276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.357819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.357837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.357844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.369253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.369270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.369276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.380221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.380239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.380245] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.391077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.391093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.391100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.402243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.402260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.402266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.413026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.413043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.413050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.422152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.422169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.422175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.429772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.429793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.429800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.440419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.440437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.440443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.451055] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.451073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 
[2024-10-08 17:49:10.451079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.460475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.460492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.460499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.470368] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.470385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.470391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.479881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.479899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.479905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.490822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.490840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.490846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.501821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.501837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.501844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.512139] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.512156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.512163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.523896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.523914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17632 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.523920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.536638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.536656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.536662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.548813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.548830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.548836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.555311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.555329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.555335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.565128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.565145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.565151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.575373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.575390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.575396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.595 [2024-10-08 17:49:10.585957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.595 [2024-10-08 17:49:10.585978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.595 [2024-10-08 17:49:10.585985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.596367] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.596384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.596391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.603429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.603446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.603456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.612118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.612135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.612141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.621234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.621252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.621258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.632324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.632342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.632348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.643943] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.643961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.643968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.654863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.654881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.654888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.666908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.666926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.666932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.677632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.677649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.677656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.689057] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.689074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.689081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.696802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.696819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.696825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.706770] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.706787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.706794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.714762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.714780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.714786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.726399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.726415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.726422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.736979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 
[2024-10-08 17:49:10.736997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.737003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.747312] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.747329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.747336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.756069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.756086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.756092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.766371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.766389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.766395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.778175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.778192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.778202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.788140] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.788157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.788164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.797913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.797931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.797937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.808480] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.808498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.808504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.816270] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.816288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.816294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.826912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.826930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.826936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.836464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.836481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.836488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.858 [2024-10-08 17:49:10.844993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:18.858 [2024-10-08 17:49:10.845011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-10-08 17:49:10.845017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.120 [2024-10-08 17:49:10.854294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.120 [2024-10-08 17:49:10.854312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.120 [2024-10-08 17:49:10.854319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.120 [2024-10-08 17:49:10.863927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.120 [2024-10-08 17:49:10.863951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.120 [2024-10-08 17:49:10.863958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.120 [2024-10-08 17:49:10.869364] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.120 [2024-10-08 17:49:10.869382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.120 [2024-10-08 17:49:10.869388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.120 [2024-10-08 17:49:10.878645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.120 [2024-10-08 17:49:10.878662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.120 [2024-10-08 17:49:10.878669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.120 [2024-10-08 17:49:10.881336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.120 [2024-10-08 17:49:10.881354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.120 [2024-10-08 17:49:10.881360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.120 [2024-10-08 17:49:10.891399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.120 [2024-10-08 17:49:10.891416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.120 [2024-10-08 17:49:10.891422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.120 [2024-10-08 17:49:10.900077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.120 [2024-10-08 17:49:10.900094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.120 [2024-10-08 17:49:10.900100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.120 [2024-10-08 17:49:10.910368] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.120 [2024-10-08 17:49:10.910385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.120 [2024-10-08 17:49:10.910391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.120 [2024-10-08 17:49:10.919789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.120 [2024-10-08 17:49:10.919807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.120 [2024-10-08 17:49:10.919813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
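In every completion print the parenthesized pair is (status code type/status code) in hex, so (00/22) is status code type 0x0, generic command status, with status code 0x22, which the NVMe base specification names Command Transient Transport Error. An illustrative decoder for the pair, assuming nothing beyond the fields visible in these prints:

    # Sketch only: map the (SCT/SC) pair from spdk_nvme_print_completion output
    # to a readable label; the SCT names follow the NVMe base specification.
    decode_status() {
        local sct=$1 sc=$2
        case "$sct" in
            00) echo "SCT 0x0 (generic command status), SC 0x$sc" ;;
            01) echo "SCT 0x1 (command specific status), SC 0x$sc" ;;
            02) echo "SCT 0x2 (media and data integrity errors), SC 0x$sc" ;;
            03) echo "SCT 0x3 (path related status), SC 0x$sc" ;;
            *)  echo "SCT 0x$sct, SC 0x$sc" ;;
        esac
    }
    decode_status 00 22   # SC 0x22 under SCT 0x0: Command Transient Transport Error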
00:33:19.120 [2024-10-08 17:49:10.929665] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.120 [2024-10-08 17:49:10.929683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.120 [2024-10-08 17:49:10.929689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.120 [2024-10-08 17:49:10.939985] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.120 [2024-10-08 17:49:10.940002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.120 [2024-10-08 17:49:10.940008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.120 [2024-10-08 17:49:10.951895] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.120 [2024-10-08 17:49:10.951912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.120 [2024-10-08 17:49:10.951919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.120 [2024-10-08 17:49:10.963349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.120 [2024-10-08 17:49:10.963366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.120 [2024-10-08 17:49:10.963372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.120 [2024-10-08 17:49:10.975844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.120 [2024-10-08 17:49:10.975861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.120 [2024-10-08 17:49:10.975868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.120 [2024-10-08 17:49:10.988007] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.120 [2024-10-08 17:49:10.988024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.120 [2024-10-08 17:49:10.988031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.120 [2024-10-08 17:49:10.995566] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.120 [2024-10-08 17:49:10.995582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.120 [2024-10-08 17:49:10.995588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.120 [2024-10-08 17:49:11.000520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.120 [2024-10-08 17:49:11.000538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.120 [2024-10-08 17:49:11.000544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.120 [2024-10-08 17:49:11.012033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.120 [2024-10-08 17:49:11.012051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.120 [2024-10-08 17:49:11.012057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.121 [2024-10-08 17:49:11.023909] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.121 [2024-10-08 17:49:11.023925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.121 [2024-10-08 17:49:11.023935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.121 [2024-10-08 17:49:11.033787] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.121 [2024-10-08 17:49:11.033805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.121 [2024-10-08 17:49:11.033811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.121 [2024-10-08 17:49:11.044839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.121 [2024-10-08 17:49:11.044856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.121 [2024-10-08 17:49:11.044862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.121 [2024-10-08 17:49:11.055938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.121 [2024-10-08 17:49:11.055956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.121 [2024-10-08 17:49:11.055962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.121 [2024-10-08 17:49:11.067139] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.121 [2024-10-08 17:49:11.067156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.121 [2024-10-08 17:49:11.067163] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.121 [2024-10-08 17:49:11.077055] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.121 [2024-10-08 17:49:11.077079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.121 [2024-10-08 17:49:11.077085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.121 [2024-10-08 17:49:11.085916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.121 [2024-10-08 17:49:11.085934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.121 [2024-10-08 17:49:11.085940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.121 [2024-10-08 17:49:11.093950] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.121 [2024-10-08 17:49:11.093967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.121 [2024-10-08 17:49:11.093978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.121 [2024-10-08 17:49:11.105335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.121 [2024-10-08 17:49:11.105353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.121 [2024-10-08 17:49:11.105359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.382 [2024-10-08 17:49:11.116358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.382 [2024-10-08 17:49:11.116378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.382 [2024-10-08 17:49:11.116384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.382 [2024-10-08 17:49:11.127315] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.382 [2024-10-08 17:49:11.127332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.382 [2024-10-08 17:49:11.127339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.382 [2024-10-08 17:49:11.138624] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.382 [2024-10-08 17:49:11.138642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.382 [2024-10-08 17:49:11.138648] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.382 [2024-10-08 17:49:11.150840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.382 [2024-10-08 17:49:11.150857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.382 [2024-10-08 17:49:11.150863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.382 [2024-10-08 17:49:11.161834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.382 [2024-10-08 17:49:11.161852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.382 [2024-10-08 17:49:11.161859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.382 [2024-10-08 17:49:11.172868] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.382 [2024-10-08 17:49:11.172886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.382 [2024-10-08 17:49:11.172892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.382 [2024-10-08 17:49:11.183683] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.382 [2024-10-08 17:49:11.183701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.382 [2024-10-08 17:49:11.183707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.382 [2024-10-08 17:49:11.192358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.382 [2024-10-08 17:49:11.192375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.382 [2024-10-08 17:49:11.192382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.382 [2024-10-08 17:49:11.202915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.382 [2024-10-08 17:49:11.202933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.382 [2024-10-08 17:49:11.202939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.382 [2024-10-08 17:49:11.214144] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.382 [2024-10-08 17:49:11.214162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:19.382 [2024-10-08 17:49:11.214169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.382 [2024-10-08 17:49:11.224986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.382 [2024-10-08 17:49:11.225003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.382 [2024-10-08 17:49:11.225010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.382 [2024-10-08 17:49:11.233843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.382 [2024-10-08 17:49:11.233860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.382 [2024-10-08 17:49:11.233866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.382 [2024-10-08 17:49:11.243856] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.382 [2024-10-08 17:49:11.243873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.382 [2024-10-08 17:49:11.243880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.382 [2024-10-08 17:49:11.255913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.382 [2024-10-08 17:49:11.255930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.382 [2024-10-08 17:49:11.255936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.382 [2024-10-08 17:49:11.265225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.382 [2024-10-08 17:49:11.265242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.382 [2024-10-08 17:49:11.265249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.382 [2024-10-08 17:49:11.275347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.382 [2024-10-08 17:49:11.275364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.382 [2024-10-08 17:49:11.275370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.382 [2024-10-08 17:49:11.282677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0) 00:33:19.382 [2024-10-08 17:49:11.282694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8608 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:19.382 [2024-10-08 17:49:11.282701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:19.382 [2024-10-08 17:49:11.292811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0)
00:33:19.382 [2024-10-08 17:49:11.292828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:19.383 [2024-10-08 17:49:11.292838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:19.383 [2024-10-08 17:49:11.303359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0)
00:33:19.383 [2024-10-08 17:49:11.303376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:19.383 [2024-10-08 17:49:11.303382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:19.383 [2024-10-08 17:49:11.313704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed8b0)
00:33:19.383 [2024-10-08 17:49:11.313722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:19.383 [2024-10-08 17:49:11.313728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:19.383 3257.50 IOPS, 407.19 MiB/s
00:33:19.383 Latency(us)
00:33:19.383 [2024-10-08T15:49:11.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:19.383 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:19.383 nvme0n1 : 2.00 3261.96 407.75 0.00 0.00 4902.42 477.87 14964.05
00:33:19.383 [2024-10-08T15:49:11.375Z] ===================================================================================================================
00:33:19.383 [2024-10-08T15:49:11.375Z] Total : 3261.96 407.75 0.00 0.00 4902.42 477.87 14964.05
00:33:19.383 {
00:33:19.383 "results": [
00:33:19.383 {
00:33:19.383 "job": "nvme0n1",
00:33:19.383 "core_mask": "0x2",
00:33:19.383 "workload": "randread",
00:33:19.383 "status": "finished",
00:33:19.383 "queue_depth": 16,
00:33:19.383 "io_size": 131072,
00:33:19.383 "runtime": 2.00217,
00:33:19.383 "iops": 3261.9607725617707,
00:33:19.383 "mibps": 407.74509657022134,
00:33:19.383 "io_failed": 0,
00:33:19.383 "io_timeout": 0,
00:33:19.383 "avg_latency_us": 4902.415548410147,
00:33:19.383 "min_latency_us": 477.8666666666667,
00:33:19.383 "max_latency_us": 14964.053333333333
00:33:19.383 }
00:33:19.383 ],
00:33:19.383 "core_count": 1
00:33:19.383 }
00:33:19.383 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:19.383 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:19.383 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:19.383 | .driver_specific
00:33:19.383 | .nvme_error
00:33:19.383 | .status_code
00:33:19.383 | .command_transient_transport_error'
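The three digest.sh trace lines above are what turn this flood into a pass/fail signal: get_transient_errcount queries bdevperf over its RPC socket for iostat and extracts the per-status NVMe error counter from the reply. Reassembled from the trace, the helper amounts to roughly the sketch below (the exact shell in digest.sh may differ); the counter is only populated because the bdev layer is configured with --nvme-error-stat, as traced further down.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    get_transient_errcount() {
        "$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }
    (( $(get_transient_errcount nvme0n1) > 0 ))   # below this resolves to (( 210 > 0 ))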
00:33:19.643 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 548577
00:33:19.643 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 548577 ']'
00:33:19.643 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 548577
00:33:19.643 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:33:19.644 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:19.644 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 548577
00:33:19.644 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:33:19.644 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:33:19.644 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 548577'
killing process with pid 548577
00:33:19.644 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 548577
00:33:19.644 Received shutdown signal, test time was about 2.000000 seconds
00:33:19.644
00:33:19.644 Latency(us)
00:33:19.644 [2024-10-08T15:49:11.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:19.644 [2024-10-08T15:49:11.636Z] ===================================================================================================================
00:33:19.644 [2024-10-08T15:49:11.636Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:19.644 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 548577
00:33:19.904 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:33:19.904 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:19.904 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:19.904 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:33:19.904 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:33:19.904 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=549263
00:33:19.904 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 549263 /var/tmp/bperf.sock
00:33:19.904 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 549263 ']'
00:33:19.904 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
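The digest.sh@57 line above is the whole bperf launch for the randwrite leg: bdevperf pinned to core 1 (-m 2), talking on its own RPC socket, 4 KiB random writes at queue depth 128 for 2 seconds, with -z keeping it idle until a perform_tests RPC arrives. A minimal stand-alone equivalent of the traced launch (same flags as the trace; waitforlisten is the autotest_common.sh helper whose internals are traced next):

    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock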
00:33:19.904 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:19.904 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:19.904 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:19.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:19.904 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:33:19.904 17:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:19.904 [2024-10-08 17:49:11.770835] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
00:33:19.904 [2024-10-08 17:49:11.770888] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid549263 ]
00:33:19.904 [2024-10-08 17:49:11.848801] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:20.165 [2024-10-08 17:49:11.899197] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:33:20.737 17:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:20.737 17:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:33:20.737 17:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:20.737 17:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:20.997 17:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:20.997 17:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:20.997 17:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:20.997 17:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:20.997 17:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:20.997 17:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:21.258 nvme0n1
00:33:21.258 17:49:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:33:21.258 17:49:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:21.258 17:49:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:21.258 17:49:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
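At this point the error path is fully armed. Condensing the RPC sequence just traced (bperf_rpc expands to rpc.py against /var/tmp/bperf.sock; rpc_cmd drives the nvmf target application, whose socket is not expanded in this trace, so the plain rpc.py calls below assume the target's default socket):

    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc.py accel_error_inject_error -o crc32c -t disable           # target side: injection off while attaching
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0             # --ddgst enables the TCP data digest
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256    # target side: corrupt crc32c results (-o/-t/-i as traced)

The run is then kicked off with bperf_py perform_tests, traced next. Each corrupted digest is flagged by the target's TCP transport (tcp.c:2233 data_crc32_calc_done) and completed to the host as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which bdev_nvme retries and tallies; that is the WRITE/completion pattern filling the rest of this log.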
17:49:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
17:49:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:21.258 Running I/O for 2 seconds...
00:33:21.520 [2024-10-08 17:49:13.253950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f9f68
00:33:21.520 [2024-10-08 17:49:13.254672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:21.520 [2024-10-08 17:49:13.254701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:21.520 [2024-10-08 17:49:13.263414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198ef270
00:33:21.520 [2024-10-08 17:49:13.264044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:21.520 [2024-10-08 17:49:13.264063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:33:21.520 [2024-10-08 17:49:13.271782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e84c0
00:33:21.520 [2024-10-08 17:49:13.272504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:21.520 [2024-10-08 17:49:13.272521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:33:21.520 [2024-10-08 17:49:13.280245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f2d80
00:33:21.520 [2024-10-08 17:49:13.280977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:21.520 [2024-10-08 17:49:13.280994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:33:21.520 [2024-10-08 17:49:13.288702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e4140
00:33:21.520 [2024-10-08 17:49:13.289435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:21.520 [2024-10-08 17:49:13.289452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:33:21.520 [2024-10-08 17:49:13.297174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198ef270
00:33:21.520 [2024-10-08 17:49:13.297898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:21.520 [2024-10-08 17:49:13.297919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:33:21.520 [2024-10-08 17:49:13.305606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with
pdu=0x2000198e84c0 00:33:21.520 [2024-10-08 17:49:13.306343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.520 [2024-10-08 17:49:13.306359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:21.520 [2024-10-08 17:49:13.314066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f2d80 00:33:21.520 [2024-10-08 17:49:13.314776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.520 [2024-10-08 17:49:13.314792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:21.520 [2024-10-08 17:49:13.322478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e4140 00:33:21.520 [2024-10-08 17:49:13.323182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.520 [2024-10-08 17:49:13.323198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:21.520 [2024-10-08 17:49:13.330919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198ef270 00:33:21.520 [2024-10-08 17:49:13.331657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.520 [2024-10-08 17:49:13.331674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:21.520 [2024-10-08 17:49:13.339351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e84c0 00:33:21.520 [2024-10-08 17:49:13.340069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.520 [2024-10-08 17:49:13.340085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:21.520 [2024-10-08 17:49:13.347772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f2d80 00:33:21.520 [2024-10-08 17:49:13.348466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.520 [2024-10-08 17:49:13.348482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:21.520 [2024-10-08 17:49:13.356181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e4140 00:33:21.520 [2024-10-08 17:49:13.356890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.520 [2024-10-08 17:49:13.356906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:21.520 [2024-10-08 17:49:13.364407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xf2ca80) with pdu=0x2000198f0350 00:33:21.520 [2024-10-08 17:49:13.365295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.520 [2024-10-08 17:49:13.365312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:21.520 [2024-10-08 17:49:13.373145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fa7d8 00:33:21.520 [2024-10-08 17:49:13.374010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.520 [2024-10-08 17:49:13.374026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:21.520 [2024-10-08 17:49:13.381567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198ebfd0 00:33:21.520 [2024-10-08 17:49:13.382436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.520 [2024-10-08 17:49:13.382452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:21.520 [2024-10-08 17:49:13.390008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f8e88 00:33:21.520 [2024-10-08 17:49:13.390866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.520 [2024-10-08 17:49:13.390882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:21.520 [2024-10-08 17:49:13.398457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198ed920 00:33:21.520 [2024-10-08 17:49:13.399324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.520 [2024-10-08 17:49:13.399340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:21.520 [2024-10-08 17:49:13.406879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fa7d8 00:33:21.520 [2024-10-08 17:49:13.407753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.520 [2024-10-08 17:49:13.407768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:21.520 [2024-10-08 17:49:13.414670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e5220 00:33:21.520 [2024-10-08 17:49:13.415456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.520 [2024-10-08 17:49:13.415471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:21.520 [2024-10-08 17:49:13.424076] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fa3a0 00:33:21.520 [2024-10-08 17:49:13.425044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.520 [2024-10-08 17:49:13.425060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:21.520 [2024-10-08 17:49:13.432631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f9f68 00:33:21.520 [2024-10-08 17:49:13.433722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.520 [2024-10-08 17:49:13.433738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:21.520 [2024-10-08 17:49:13.441926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198df988 00:33:21.521 [2024-10-08 17:49:13.443090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.521 [2024-10-08 17:49:13.443106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:21.521 [2024-10-08 17:49:13.448848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f35f0 00:33:21.521 [2024-10-08 17:49:13.449603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.521 [2024-10-08 17:49:13.449619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.521 [2024-10-08 17:49:13.457273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e9e10 00:33:21.521 [2024-10-08 17:49:13.458027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.521 [2024-10-08 17:49:13.458042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.521 [2024-10-08 17:49:13.465672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e27f0 00:33:21.521 [2024-10-08 17:49:13.466416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.521 [2024-10-08 17:49:13.466431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.521 [2024-10-08 17:49:13.474101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e1710 00:33:21.521 [2024-10-08 17:49:13.474851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.521 [2024-10-08 17:49:13.474867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.521 [2024-10-08 17:49:13.482508] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198de470 00:33:21.521 [2024-10-08 17:49:13.483245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.521 [2024-10-08 17:49:13.483261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.521 [2024-10-08 17:49:13.490915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e95a0 00:33:21.521 [2024-10-08 17:49:13.491670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.521 [2024-10-08 17:49:13.491685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.521 [2024-10-08 17:49:13.499579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f1430 00:33:21.521 [2024-10-08 17:49:13.500322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.521 [2024-10-08 17:49:13.500338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.521 [2024-10-08 17:49:13.507996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f0350 00:33:21.521 [2024-10-08 17:49:13.508746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.521 [2024-10-08 17:49:13.508762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.782 [2024-10-08 17:49:13.516401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198ef270 00:33:21.782 [2024-10-08 17:49:13.517159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.782 [2024-10-08 17:49:13.517178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.782 [2024-10-08 17:49:13.524803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198ee190 00:33:21.782 [2024-10-08 17:49:13.525561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.782 [2024-10-08 17:49:13.525577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.782 [2024-10-08 17:49:13.533229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198ff3c8 00:33:21.782 [2024-10-08 17:49:13.533978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.782 [2024-10-08 17:49:13.533994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.782 
[2024-10-08 17:49:13.541657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fda78 00:33:21.782 [2024-10-08 17:49:13.542404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.782 [2024-10-08 17:49:13.542419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.782 [2024-10-08 17:49:13.550060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fd208 00:33:21.782 [2024-10-08 17:49:13.550807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.782 [2024-10-08 17:49:13.550823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.782 [2024-10-08 17:49:13.558467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f31b8 00:33:21.782 [2024-10-08 17:49:13.559224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.782 [2024-10-08 17:49:13.559239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.782 [2024-10-08 17:49:13.566865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f8a50 00:33:21.782 [2024-10-08 17:49:13.567619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.782 [2024-10-08 17:49:13.567635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.782 [2024-10-08 17:49:13.575287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f9b30 00:33:21.782 [2024-10-08 17:49:13.576020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.782 [2024-10-08 17:49:13.576036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.782 [2024-10-08 17:49:13.583706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fac10 00:33:21.782 [2024-10-08 17:49:13.584457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.782 [2024-10-08 17:49:13.584473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.782 [2024-10-08 17:49:13.592127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198eaab8 00:33:21.782 [2024-10-08 17:49:13.592822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.782 [2024-10-08 17:49:13.592838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 
dnr:0 00:33:21.782 [2024-10-08 17:49:13.600528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e99d8 00:33:21.782 [2024-10-08 17:49:13.601269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.782 [2024-10-08 17:49:13.601285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.782 [2024-10-08 17:49:13.608929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e23b8 00:33:21.782 [2024-10-08 17:49:13.609682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.782 [2024-10-08 17:49:13.609697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.782 [2024-10-08 17:49:13.617348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e12d8 00:33:21.782 [2024-10-08 17:49:13.618093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.782 [2024-10-08 17:49:13.618109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.782 [2024-10-08 17:49:13.625825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198de038 00:33:21.782 [2024-10-08 17:49:13.626524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.782 [2024-10-08 17:49:13.626539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.782 [2024-10-08 17:49:13.634243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f20d8 00:33:21.782 [2024-10-08 17:49:13.634989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.782 [2024-10-08 17:49:13.635005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.782 [2024-10-08 17:49:13.642646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f0ff8 00:33:21.782 [2024-10-08 17:49:13.643389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.782 [2024-10-08 17:49:13.643405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.782 [2024-10-08 17:49:13.651050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198eff18 00:33:21.782 [2024-10-08 17:49:13.651784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.782 [2024-10-08 17:49:13.651799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 
sqhd:004f p:0 m:0 dnr:0 00:33:21.782 [2024-10-08 17:49:13.659451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198eee38 00:33:21.782 [2024-10-08 17:49:13.660197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.782 [2024-10-08 17:49:13.660212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.782 [2024-10-08 17:49:13.667859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f7970 00:33:21.782 [2024-10-08 17:49:13.668606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.783 [2024-10-08 17:49:13.668622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.783 [2024-10-08 17:49:13.676281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fef90 00:33:21.783 [2024-10-08 17:49:13.677029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.783 [2024-10-08 17:49:13.677045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.783 [2024-10-08 17:49:13.684701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fdeb0 00:33:21.783 [2024-10-08 17:49:13.685507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.783 [2024-10-08 17:49:13.685523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.783 [2024-10-08 17:49:13.693170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e4140 00:33:21.783 [2024-10-08 17:49:13.693901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.783 [2024-10-08 17:49:13.693917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.783 [2024-10-08 17:49:13.701564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f8618 00:33:21.783 [2024-10-08 17:49:13.702322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.783 [2024-10-08 17:49:13.702338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.783 [2024-10-08 17:49:13.709428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f6458 00:33:21.783 [2024-10-08 17:49:13.710155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.783 [2024-10-08 17:49:13.710170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:21.783 [2024-10-08 17:49:13.718878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198edd58 00:33:21.783 [2024-10-08 17:49:13.719702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.783 [2024-10-08 17:49:13.719718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:21.783 [2024-10-08 17:49:13.727303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e88f8 00:33:21.783 [2024-10-08 17:49:13.728171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.783 [2024-10-08 17:49:13.728186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:21.783 [2024-10-08 17:49:13.735701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fe720 00:33:21.783 [2024-10-08 17:49:13.736564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.783 [2024-10-08 17:49:13.736583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:21.783 [2024-10-08 17:49:13.744124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e01f8 00:33:21.783 [2024-10-08 17:49:13.744994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.783 [2024-10-08 17:49:13.745010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:21.783 [2024-10-08 17:49:13.752537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198eea00 00:33:21.783 [2024-10-08 17:49:13.753415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.783 [2024-10-08 17:49:13.753431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:21.783 [2024-10-08 17:49:13.760965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198efae0 00:33:21.783 [2024-10-08 17:49:13.761822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.783 [2024-10-08 17:49:13.761839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:21.783 [2024-10-08 17:49:13.769392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f0bc0 00:33:21.783 [2024-10-08 17:49:13.770242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.783 [2024-10-08 17:49:13.770258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:22.044 [2024-10-08 17:49:13.777810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198ebfd0 00:33:22.044 [2024-10-08 17:49:13.778667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.044 [2024-10-08 17:49:13.778683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:22.044 [2024-10-08 17:49:13.786228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198eaef0 00:33:22.044 [2024-10-08 17:49:13.787080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.044 [2024-10-08 17:49:13.787096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:22.044 [2024-10-08 17:49:13.794631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f4298 00:33:22.044 [2024-10-08 17:49:13.795497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.044 [2024-10-08 17:49:13.795513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:22.044 [2024-10-08 17:49:13.803049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f2d80 00:33:22.044 [2024-10-08 17:49:13.803926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.044 [2024-10-08 17:49:13.803942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:22.044 [2024-10-08 17:49:13.811483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e6fa8 00:33:22.044 [2024-10-08 17:49:13.812346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.044 [2024-10-08 17:49:13.812362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:22.044 [2024-10-08 17:49:13.819921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f4f40 00:33:22.044 [2024-10-08 17:49:13.820736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.044 [2024-10-08 17:49:13.820752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:22.044 [2024-10-08 17:49:13.828333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f6020 00:33:22.044 [2024-10-08 17:49:13.829219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.044 [2024-10-08 17:49:13.829236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:22.044 [2024-10-08 17:49:13.836740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e8088 00:33:22.044 [2024-10-08 17:49:13.837568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.044 [2024-10-08 17:49:13.837584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:22.044 [2024-10-08 17:49:13.845586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fac10 00:33:22.044 [2024-10-08 17:49:13.846273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.044 [2024-10-08 17:49:13.846289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:22.044 [2024-10-08 17:49:13.853885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e9e10 00:33:22.044 [2024-10-08 17:49:13.854502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.044 [2024-10-08 17:49:13.854519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:22.044 [2024-10-08 17:49:13.862325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fac10 00:33:22.044 [2024-10-08 17:49:13.862916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.044 [2024-10-08 17:49:13.862932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:22.044 [2024-10-08 17:49:13.870747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e9e10 00:33:22.044 [2024-10-08 17:49:13.871474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.044 [2024-10-08 17:49:13.871490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:22.044 [2024-10-08 17:49:13.879438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f6cc8 00:33:22.044 [2024-10-08 17:49:13.880410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.044 [2024-10-08 17:49:13.880425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:22.044 [2024-10-08 17:49:13.887856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e0a68 00:33:22.045 [2024-10-08 17:49:13.888828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.045 [2024-10-08 17:49:13.888843] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:22.045 [2024-10-08 17:49:13.896281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f7970 00:33:22.045 [2024-10-08 17:49:13.897251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.045 [2024-10-08 17:49:13.897266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:22.045 [2024-10-08 17:49:13.904715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fef90 00:33:22.045 [2024-10-08 17:49:13.905669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.045 [2024-10-08 17:49:13.905684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:22.045 [2024-10-08 17:49:13.913168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e8d30 00:33:22.045 [2024-10-08 17:49:13.914093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.045 [2024-10-08 17:49:13.914109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:22.045 [2024-10-08 17:49:13.921566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fe2e8 00:33:22.045 [2024-10-08 17:49:13.922380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.045 [2024-10-08 17:49:13.922396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:22.045 [2024-10-08 17:49:13.930269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e38d0 00:33:22.045 [2024-10-08 17:49:13.931336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.045 [2024-10-08 17:49:13.931351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:22.045 [2024-10-08 17:49:13.938840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e6300 00:33:22.045 [2024-10-08 17:49:13.939927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.045 [2024-10-08 17:49:13.939944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:22.045 [2024-10-08 17:49:13.947282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f0350 00:33:22.045 [2024-10-08 17:49:13.948368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.045 [2024-10-08 17:49:13.948384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:22.045 [2024-10-08 17:49:13.955712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f1430 00:33:22.045 [2024-10-08 17:49:13.956784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.045 [2024-10-08 17:49:13.956803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:22.045 [2024-10-08 17:49:13.964129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198eb760 00:33:22.045 [2024-10-08 17:49:13.965203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.045 [2024-10-08 17:49:13.965219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:22.045 [2024-10-08 17:49:13.972554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fb8b8 00:33:22.045 [2024-10-08 17:49:13.973628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.045 [2024-10-08 17:49:13.973644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:22.045 [2024-10-08 17:49:13.980959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f2510 00:33:22.045 [2024-10-08 17:49:13.982039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.045 [2024-10-08 17:49:13.982055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:22.045 [2024-10-08 17:49:13.989408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e95a0 00:33:22.045 [2024-10-08 17:49:13.990500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.045 [2024-10-08 17:49:13.990516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:22.045 [2024-10-08 17:49:13.997832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198de470 00:33:22.045 [2024-10-08 17:49:13.998916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.045 [2024-10-08 17:49:13.998932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:22.045 [2024-10-08 17:49:14.006275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e1710 00:33:22.045 [2024-10-08 17:49:14.007367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.045 [2024-10-08 17:49:14.007382] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:22.045 [2024-10-08 17:49:14.014688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f31b8 00:33:22.045 [2024-10-08 17:49:14.015775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.045 [2024-10-08 17:49:14.015791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:22.045 [2024-10-08 17:49:14.023098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fd208 00:33:22.045 [2024-10-08 17:49:14.024226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.045 [2024-10-08 17:49:14.024242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:22.045 [2024-10-08 17:49:14.031596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e2c28 00:33:22.045 [2024-10-08 17:49:14.032673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.045 [2024-10-08 17:49:14.032692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:22.306 [2024-10-08 17:49:14.040042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198ea248 00:33:22.306 [2024-10-08 17:49:14.041109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.306 [2024-10-08 17:49:14.041126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:22.306 [2024-10-08 17:49:14.048476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f3a28 00:33:22.306 [2024-10-08 17:49:14.049563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.306 [2024-10-08 17:49:14.049579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:22.306 [2024-10-08 17:49:14.056897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f6020 00:33:22.306 [2024-10-08 17:49:14.057986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.306 [2024-10-08 17:49:14.058002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:22.306 [2024-10-08 17:49:14.065301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f4f40 00:33:22.306 [2024-10-08 17:49:14.066392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.306 [2024-10-08 
17:49:14.066408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:22.306 [2024-10-08 17:49:14.073727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e6fa8
00:33:22.306 [2024-10-08 17:49:14.074799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.306 [2024-10-08 17:49:14.074816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:22.306 [2024-10-08 17:49:14.082172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f2d80
00:33:22.306 [2024-10-08 17:49:14.083258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.306 [2024-10-08 17:49:14.083274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:22.306 [2024-10-08 17:49:14.090600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f0ff8
00:33:22.306 [2024-10-08 17:49:14.091632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.306 [2024-10-08 17:49:14.091648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:22.306 [2024-10-08 17:49:14.099024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198ec408
00:33:22.306 [2024-10-08 17:49:14.100093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.306 [2024-10-08 17:49:14.100109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:22.306 [2024-10-08 17:49:14.107436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198eb328
00:33:22.306 [2024-10-08 17:49:14.108535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.306 [2024-10-08 17:49:14.108551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:22.306 [2024-10-08 17:49:14.115872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f3e60
00:33:22.306 [2024-10-08 17:49:14.116941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.306 [2024-10-08 17:49:14.116958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:22.306 [2024-10-08 17:49:14.124323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f20d8
00:33:22.306 [2024-10-08 17:49:14.125420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.306 [2024-10-08 17:49:14.125436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:22.306 [2024-10-08 17:49:14.132773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198de038
00:33:22.306 [2024-10-08 17:49:14.133866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.306 [2024-10-08 17:49:14.133882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:22.306 [2024-10-08 17:49:14.141206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e12d8
00:33:22.306 [2024-10-08 17:49:14.142240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.306 [2024-10-08 17:49:14.142256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:22.306 [2024-10-08 17:49:14.149613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198feb58
00:33:22.306 [2024-10-08 17:49:14.150702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.306 [2024-10-08 17:49:14.150719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:22.306 [2024-10-08 17:49:14.158034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e4140
00:33:22.306 [2024-10-08 17:49:14.159099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.306 [2024-10-08 17:49:14.159115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:22.306 [2024-10-08 17:49:14.166468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e1f80
00:33:22.306 [2024-10-08 17:49:14.167538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.306 [2024-10-08 17:49:14.167554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:22.306 [2024-10-08 17:49:14.174935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3060
00:33:22.306 [2024-10-08 17:49:14.175969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.306 [2024-10-08 17:49:14.175996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:22.306 [2024-10-08 17:49:14.183365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198ea680
00:33:22.306 [2024-10-08 17:49:14.184438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.306 [2024-10-08 17:49:14.184454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:22.306 [2024-10-08 17:49:14.191777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fb048
00:33:22.306 [2024-10-08 17:49:14.192866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.306 [2024-10-08 17:49:14.192882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:22.306 [2024-10-08 17:49:14.200184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f5378
00:33:22.306 [2024-10-08 17:49:14.201263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.306 [2024-10-08 17:49:14.201279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:22.306 [2024-10-08 17:49:14.208604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e38d0
00:33:22.306 [2024-10-08 17:49:14.209654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.306 [2024-10-08 17:49:14.209670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:22.306 [2024-10-08 17:49:14.217036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e6300
00:33:22.306 [2024-10-08 17:49:14.218120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.306 [2024-10-08 17:49:14.218135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:22.306 [2024-10-08 17:49:14.225464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f0350
00:33:22.306 [2024-10-08 17:49:14.226534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.307 [2024-10-08 17:49:14.226550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:22.307 [2024-10-08 17:49:14.233902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f1430
00:33:22.307 [2024-10-08 17:49:14.234988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.307 [2024-10-08 17:49:14.235005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:22.307 29956.00 IOPS, 117.02 MiB/s [2024-10-08T15:49:14.299Z] [2024-10-08 17:49:14.242317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198eaef0 [2024-10-08 17:49:14.243392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.307 [2024-10-08 17:49:14.243408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.307 [2024-10-08 17:49:14.250738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f1ca0
00:33:22.307 [2024-10-08 17:49:14.251828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.307 [2024-10-08 17:49:14.251846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.307 [2024-10-08 17:49:14.259159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e0ea0
00:33:22.307 [2024-10-08 17:49:14.260221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.307 [2024-10-08 17:49:14.260237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.307 [2024-10-08 17:49:14.267579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3d08
00:33:22.307 [2024-10-08 17:49:14.268659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.307 [2024-10-08 17:49:14.268676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.307 [2024-10-08 17:49:14.276029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e99d8
00:33:22.307 [2024-10-08 17:49:14.277058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.307 [2024-10-08 17:49:14.277075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.307 [2024-10-08 17:49:14.284443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fac10
00:33:22.307 [2024-10-08 17:49:14.285525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.307 [2024-10-08 17:49:14.285541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.307 [2024-10-08 17:49:14.292853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3498
00:33:22.307 [2024-10-08 17:49:14.293927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.307 [2024-10-08 17:49:14.293943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.301280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f0bc0
00:33:22.568 [2024-10-08 17:49:14.302327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.302343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.309715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198eaef0
00:33:22.568 [2024-10-08 17:49:14.310810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.310826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.318148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f1ca0
00:33:22.568 [2024-10-08 17:49:14.319214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.319230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.326584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e0ea0
00:33:22.568 [2024-10-08 17:49:14.327658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.327675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.335013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3d08
00:33:22.568 [2024-10-08 17:49:14.336096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.336112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.343429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e99d8
00:33:22.568 [2024-10-08 17:49:14.344501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.344517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.351840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fac10
00:33:22.568 [2024-10-08 17:49:14.352925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.352942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.360270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3498
00:33:22.568 [2024-10-08 17:49:14.361308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.361324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.368685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f0bc0
00:33:22.568 [2024-10-08 17:49:14.369774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.369790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.377095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198eaef0
00:33:22.568 [2024-10-08 17:49:14.378152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.378169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.385495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f1ca0
00:33:22.568 [2024-10-08 17:49:14.386543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.386560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.394064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e0ea0
00:33:22.568 [2024-10-08 17:49:14.395159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.395175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.402498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3d08
00:33:22.568 [2024-10-08 17:49:14.403548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.403564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.410938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e99d8
00:33:22.568 [2024-10-08 17:49:14.412017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.412033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.419369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fac10
00:33:22.568 [2024-10-08 17:49:14.420460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.420477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.427784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3498
00:33:22.568 [2024-10-08 17:49:14.428860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.428876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.436192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f0bc0
00:33:22.568 [2024-10-08 17:49:14.437273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.437289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.444611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198eaef0
00:33:22.568 [2024-10-08 17:49:14.445681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.445697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.453026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f1ca0
00:33:22.568 [2024-10-08 17:49:14.454082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.454098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.461469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e0ea0
00:33:22.568 [2024-10-08 17:49:14.462560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.462576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.469874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3d08
00:33:22.568 [2024-10-08 17:49:14.470948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.470967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.478292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e99d8
00:33:22.568 [2024-10-08 17:49:14.479366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.479382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.486694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fac10
00:33:22.568 [2024-10-08 17:49:14.487788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.487803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.495124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3498
00:33:22.568 [2024-10-08 17:49:14.496182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.496198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.503548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f0bc0
00:33:22.568 [2024-10-08 17:49:14.504635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.504651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.512139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198eaef0
00:33:22.568 [2024-10-08 17:49:14.513219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.513235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.520566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f1ca0
00:33:22.568 [2024-10-08 17:49:14.521646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.521662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.528992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e0ea0
00:33:22.568 [2024-10-08 17:49:14.530060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.568 [2024-10-08 17:49:14.530076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.568 [2024-10-08 17:49:14.537401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3d08
00:33:22.569 [2024-10-08 17:49:14.538483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.569 [2024-10-08 17:49:14.538499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.569 [2024-10-08 17:49:14.545824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e99d8
00:33:22.569 [2024-10-08 17:49:14.546902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.569 [2024-10-08 17:49:14.546918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.569 [2024-10-08 17:49:14.554264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fac10
00:33:22.569 [2024-10-08 17:49:14.555342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.569 [2024-10-08 17:49:14.555358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.562682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3498
00:33:22.830 [2024-10-08 17:49:14.563749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.563765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.571086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f0bc0
00:33:22.830 [2024-10-08 17:49:14.572160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.572176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.579485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198eaef0
00:33:22.830 [2024-10-08 17:49:14.580562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.580578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.587908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f1ca0
00:33:22.830 [2024-10-08 17:49:14.588995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.589011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.596332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e0ea0
00:33:22.830 [2024-10-08 17:49:14.597406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.597422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.604728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3d08
00:33:22.830 [2024-10-08 17:49:14.605802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.605818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.613132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e99d8
00:33:22.830 [2024-10-08 17:49:14.614208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.614224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.621545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fac10
00:33:22.830 [2024-10-08 17:49:14.622617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.622633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.629962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3498
00:33:22.830 [2024-10-08 17:49:14.631068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.631084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.638384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f0bc0
00:33:22.830 [2024-10-08 17:49:14.639457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.639473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.646807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198eaef0
00:33:22.830 [2024-10-08 17:49:14.647898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.647913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.655203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f1ca0
00:33:22.830 [2024-10-08 17:49:14.656274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.656290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.663602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e0ea0
00:33:22.830 [2024-10-08 17:49:14.664630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.664646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.671999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3d08
00:33:22.830 [2024-10-08 17:49:14.672934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.672950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.680410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e99d8
00:33:22.830 [2024-10-08 17:49:14.681485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.681500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.688832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fac10
00:33:22.830 [2024-10-08 17:49:14.689920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.689939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.697256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3498
00:33:22.830 [2024-10-08 17:49:14.698329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.698345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.705654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f0bc0
00:33:22.830 [2024-10-08 17:49:14.706725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.706740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.714061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198eaef0
00:33:22.830 [2024-10-08 17:49:14.715138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.715154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.722518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f1ca0
00:33:22.830 [2024-10-08 17:49:14.723608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.723624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.730963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e0ea0
00:33:22.830 [2024-10-08 17:49:14.732041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.732057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.739384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3d08
00:33:22.830 [2024-10-08 17:49:14.740471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.740487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.747782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e99d8
00:33:22.830 [2024-10-08 17:49:14.748853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.748869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.756187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fac10
00:33:22.830 [2024-10-08 17:49:14.757220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.757236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.764602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3498
00:33:22.830 [2024-10-08 17:49:14.765692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.765710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.773024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f0bc0
00:33:22.830 [2024-10-08 17:49:14.774093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.774109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.781453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198eaef0
00:33:22.830 [2024-10-08 17:49:14.782545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.830 [2024-10-08 17:49:14.782561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.830 [2024-10-08 17:49:14.789853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f1ca0
00:33:22.830 [2024-10-08 17:49:14.790934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.831 [2024-10-08 17:49:14.790950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.831 [2024-10-08 17:49:14.798270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e0ea0
00:33:22.831 [2024-10-08 17:49:14.799356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.831 [2024-10-08 17:49:14.799372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.831 [2024-10-08 17:49:14.806669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3d08
00:33:22.831 [2024-10-08 17:49:14.807744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.831 [2024-10-08 17:49:14.807760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:22.831 [2024-10-08 17:49:14.815092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e99d8
00:33:22.831 [2024-10-08 17:49:14.816170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.831 [2024-10-08 17:49:14.816186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.092 [2024-10-08 17:49:14.823531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fac10
00:33:23.092 [2024-10-08 17:49:14.824607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.092 [2024-10-08 17:49:14.824622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.092 [2024-10-08 17:49:14.831960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3498
00:33:23.092 [2024-10-08 17:49:14.833036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.092 [2024-10-08 17:49:14.833052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.092 [2024-10-08 17:49:14.840365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f0bc0
00:33:23.092 [2024-10-08 17:49:14.841462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.092 [2024-10-08 17:49:14.841478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.092 [2024-10-08 17:49:14.848764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198eaef0
00:33:23.092 [2024-10-08 17:49:14.849848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.092 [2024-10-08 17:49:14.849863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.092 [2024-10-08 17:49:14.857185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f1ca0
00:33:23.092 [2024-10-08 17:49:14.858263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.092 [2024-10-08 17:49:14.858278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.092 [2024-10-08 17:49:14.865608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e0ea0
00:33:23.092 [2024-10-08 17:49:14.866676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.092 [2024-10-08 17:49:14.866692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.092 [2024-10-08 17:49:14.874023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3d08
00:33:23.092 [2024-10-08 17:49:14.875091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.092 [2024-10-08 17:49:14.875107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.092 [2024-10-08 17:49:14.882420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e99d8
00:33:23.092 [2024-10-08 17:49:14.883508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.092 [2024-10-08 17:49:14.883524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.092 [2024-10-08 17:49:14.890817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fac10
00:33:23.092 [2024-10-08 17:49:14.891907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.092 [2024-10-08 17:49:14.891923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.092 [2024-10-08 17:49:14.899240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3498
00:33:23.092 [2024-10-08 17:49:14.900275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.092 [2024-10-08 17:49:14.900290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.092 [2024-10-08 17:49:14.907645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f0bc0
00:33:23.092 [2024-10-08 17:49:14.908716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.092 [2024-10-08 17:49:14.908731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.092 [2024-10-08 17:49:14.916067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198eaef0
00:33:23.092 [2024-10-08 17:49:14.917153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.092 [2024-10-08 17:49:14.917168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.092 [2024-10-08 17:49:14.924490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f1ca0
00:33:23.092 [2024-10-08 17:49:14.925568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.092 [2024-10-08 17:49:14.925584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.092 [2024-10-08 17:49:14.932907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e0ea0
00:33:23.092 [2024-10-08 17:49:14.933985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.092 [2024-10-08 17:49:14.934000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.092 [2024-10-08 17:49:14.941303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3d08
00:33:23.092 [2024-10-08 17:49:14.942376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.092 [2024-10-08 17:49:14.942392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.092 [2024-10-08 17:49:14.949717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e99d8
00:33:23.092 [2024-10-08 17:49:14.950779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.092 [2024-10-08 17:49:14.950795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.092 [2024-10-08 17:49:14.958128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fac10
00:33:23.092 [2024-10-08 17:49:14.959193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.092 [2024-10-08 17:49:14.959209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.092 [2024-10-08 17:49:14.966550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3498
00:33:23.092 [2024-10-08 17:49:14.967627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.092 [2024-10-08 17:49:14.967643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.092 [2024-10-08 17:49:14.974950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f0bc0
00:33:23.092 [2024-10-08 17:49:14.976025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.092 [2024-10-08 17:49:14.976041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.092 [2024-10-08 17:49:14.983349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198eaef0
00:33:23.092 [2024-10-08 17:49:14.984419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.092 [2024-10-08 17:49:14.984440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.092 [2024-10-08 17:49:14.991754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f1ca0
00:33:23.092 [2024-10-08 17:49:14.992845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.092 [2024-10-08 17:49:14.992861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.092 [2024-10-08 17:49:15.000202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e0ea0
00:33:23.093 [2024-10-08 17:49:15.001279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.093 [2024-10-08 17:49:15.001294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.093 [2024-10-08 17:49:15.008629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3d08
00:33:23.093 [2024-10-08 17:49:15.009703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.093 [2024-10-08 17:49:15.009719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.093 [2024-10-08 17:49:15.017046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e99d8
00:33:23.093 [2024-10-08 17:49:15.018111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.093 [2024-10-08 17:49:15.018126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.093 [2024-10-08 17:49:15.025447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fac10
00:33:23.093 [2024-10-08 17:49:15.026514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.093 [2024-10-08 17:49:15.026530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.093 [2024-10-08 17:49:15.033921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3498
00:33:23.093 [2024-10-08 17:49:15.035011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.093 [2024-10-08 17:49:15.035026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.093 [2024-10-08 17:49:15.042331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f0bc0
00:33:23.093 [2024-10-08 17:49:15.043401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.093 [2024-10-08 17:49:15.043416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.093 [2024-10-08 17:49:15.050764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198eaef0
00:33:23.093 [2024-10-08 17:49:15.051850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.093 [2024-10-08 17:49:15.051866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.093 [2024-10-08 17:49:15.059186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f1ca0
00:33:23.093 [2024-10-08 17:49:15.060247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.093 [2024-10-08 17:49:15.060262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.093 [2024-10-08 17:49:15.067595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e0ea0
00:33:23.093 [2024-10-08 17:49:15.068663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.093 [2024-10-08 17:49:15.068679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.093 [2024-10-08 17:49:15.076020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3d08
00:33:23.093 [2024-10-08 17:49:15.077106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.093 [2024-10-08 17:49:15.077122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.354 [2024-10-08 17:49:15.084433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e99d8
00:33:23.354 [2024-10-08 17:49:15.085503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.354 [2024-10-08 17:49:15.085519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.354 [2024-10-08 17:49:15.092857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fac10
00:33:23.354 [2024-10-08 17:49:15.093960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.354 [2024-10-08 17:49:15.093979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.354 [2024-10-08 17:49:15.101324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3498
00:33:23.354 [2024-10-08 17:49:15.102394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.354 [2024-10-08 17:49:15.102410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.354 [2024-10-08 17:49:15.109731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f0bc0
00:33:23.354 [2024-10-08 17:49:15.110809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.354 [2024-10-08 17:49:15.110825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.354 [2024-10-08 17:49:15.118156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198eaef0
00:33:23.354 [2024-10-08 17:49:15.119238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.354 [2024-10-08 17:49:15.119253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.354 [2024-10-08 17:49:15.126567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f1ca0
00:33:23.354 [2024-10-08 17:49:15.127659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.354 [2024-10-08 17:49:15.127675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.354 [2024-10-08 17:49:15.135001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e0ea0
00:33:23.354 [2024-10-08 17:49:15.136082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.354 [2024-10-08 17:49:15.136098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.354 [2024-10-08 17:49:15.143417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3d08
00:33:23.354 [2024-10-08 17:49:15.144497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.354 [2024-10-08 17:49:15.144513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.354 [2024-10-08 17:49:15.151848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e99d8
00:33:23.354 [2024-10-08 17:49:15.152933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.354 [2024-10-08 17:49:15.152949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.354 [2024-10-08 17:49:15.160248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fac10
00:33:23.354 [2024-10-08 17:49:15.161341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.354 [2024-10-08 17:49:15.161356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.354 [2024-10-08 17:49:15.168651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3498
00:33:23.354 [2024-10-08 17:49:15.169724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.354 [2024-10-08 17:49:15.169740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.354 [2024-10-08 17:49:15.177057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f0bc0
00:33:23.354 [2024-10-08 17:49:15.178140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.354 [2024-10-08 17:49:15.178156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.354 [2024-10-08 17:49:15.185483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198eaef0
00:33:23.354 [2024-10-08 17:49:15.186521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.354 [2024-10-08 17:49:15.186537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.354 [2024-10-08 17:49:15.193912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198f1ca0
00:33:23.354 [2024-10-08 17:49:15.195004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.354 [2024-10-08 17:49:15.195019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.355 [2024-10-08 17:49:15.202342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e0ea0
00:33:23.355 [2024-10-08 17:49:15.203386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.355 [2024-10-08 17:49:15.203404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.355 [2024-10-08 17:49:15.210752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3d08
00:33:23.355 [2024-10-08 17:49:15.211826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.355 [2024-10-08 17:49:15.211842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.355 [2024-10-08 17:49:15.219163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e99d8
00:33:23.355 [2024-10-08 17:49:15.220256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.355 [2024-10-08 17:49:15.220272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.355 [2024-10-08 17:49:15.227591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198fac10
00:33:23.355 [2024-10-08 17:49:15.228681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.355 [2024-10-08 17:49:15.228696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.355 [2024-10-08 17:49:15.236036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2ca80) with pdu=0x2000198e3498
00:33:23.355 [2024-10-08 17:49:15.237103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.355 [2024-10-08 17:49:15.237119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:23.355 30150.00 IOPS, 117.77 MiB/s
00:33:23.355 Latency(us)
[2024-10-08T15:49:15.347Z] Device Information : runtime(s)      IOPS     MiB/s  Fail/s  TO/s  Average      min      max
00:33:23.355 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:23.355 nvme0n1 : 2.00                 30164.73    117.83    0.00  0.00  4238.33  1686.19  16056.32
[2024-10-08T15:49:15.347Z] ===================================================================================================================
[2024-10-08T15:49:15.347Z] Total : 30164.73    117.83    0.00  0.00  4238.33  1686.19  16056.32
00:33:23.355 {
00:33:23.355   "results": [
00:33:23.355     {
00:33:23.355       "job": "nvme0n1",
00:33:23.355       "core_mask": "0x2",
00:33:23.355       "workload": "randwrite",
00:33:23.355       "status": "finished",
00:33:23.355       "queue_depth": 128,
00:33:23.355       "io_size": 4096,
00:33:23.355       "runtime": 2.003267,
00:33:23.355       "iops": 30164.725920209337,
00:33:23.355       "mibps": 117.83096062581772,
00:33:23.355       "io_failed": 0,
00:33:23.355       "io_timeout": 0,
00:33:23.355       "avg_latency_us": 4238.332520465127,
00:33:23.355       "min_latency_us": 1686.1866666666667,
00:33:23.355       "max_latency_us": 16056.32
00:33:23.355     }
00:33:23.355   ],
00:33:23.355   "core_count": 1
00:33:23.355 }
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:23.355 | .driver_specific
00:33:23.355 | .nvme_error
00:33:23.355 | .status_code
00:33:23.355 | .command_transient_transport_error'
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 236 > 0 ))
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 549263
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 549263 ']'
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 549263
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 549263
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 549263'
killing process with pid 549263
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 549263
Received shutdown signal, test time was about 2.000000 seconds
00:33:23.615
00:33:23.615 Latency(us)
[2024-10-08T15:49:15.607Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
[2024-10-08T15:49:15.607Z] ===================================================================================================================
[2024-10-08T15:49:15.607Z] Total : 0.00  0.00  0.00  0.00  0.00  0.00  0.00
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 549263
00:33:23.875
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=549991
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 549991 /var/tmp/bperf.sock
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 549991 ']'
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
17:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:23.876 [2024-10-08 17:49:15.674546] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
00:33:23.876 [2024-10-08 17:49:15.674600] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid549991 ]
00:33:23.876 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:23.876 Zero copy mechanism will not be used.
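The pass/fail decision for the case that just finished lives in the (( 236 > 0 )) line a few records up: digest.sh pulls per-bdev iostats over the bperf RPC socket and digs the transient-transport-error counter out of the returned JSON with jq. A standalone sketch of that check in bash, assuming the same socket and bdev name from the trace (SPDK_DIR is a stand-in for the workspace path, not part of the script):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat reports per-bdev JSON; with --nvme-error-stat enabled
        # (set when bdevperf was configured, see the trace below) the
        # driver_specific.nvme_error section carries one counter per NVMe
        # status code, keyed by name.
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }

    # 236 digest errors were counted for the randwrite/4096/qd128 case above;
    # the case passes as long as the count is non-zero.
    (( $(get_transient_errcount nvme0n1) > 0 ))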
00:33:23.876 [2024-10-08 17:49:15.751192] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:23.876 [2024-10-08 17:49:15.804065] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:33:24.815 17:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:24.815 17:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:33:24.815 17:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:24.815 17:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:24.815 17:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:24.815 17:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:24.815 17:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:24.815 17:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:24.815 17:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:24.815 17:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:25.076 nvme0n1
00:33:25.076 17:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:33:25.076 17:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:25.076 17:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:25.076 17:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:25.076 17:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:25.076 17:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:25.076 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:25.076 Zero copy mechanism will not be used.
00:33:25.076 Running I/O for 2 seconds...
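Stripped of the xtrace noise, the setup for this error case is only a handful of RPCs. A minimal bash replay of the same calls, with my reading of each step as comments (the rpc helper and SPDK_DIR are stand-ins for this sketch; the commands and flags are taken verbatim from the trace above):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

    # Keep per-status-code NVMe error counters (what get_transient_errcount
    # reads later) and retry failed I/O at the bdev layer (-1: no retry
    # limit, as I read it) so injected errors do not fail the workload.
    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Start with injection off, then attach the target with TCP data digest
    # (--ddgst) enabled so every PDU payload carries a crc32c.
    rpc accel_error_inject_error -o crc32c -t disable
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt crc32c results in the accel layer (every 32nd operation, per
    # -i 32 as I read it): the digest on the wire stops matching the payload,
    # so the target rejects the PDU and the WRITE completes with TRANSIENT
    # TRANSPORT ERROR (00/22); those are the record pairs filling this run.
    rpc accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the timed run inside the already-listening bdevperf.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests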
00:33:25.076 [2024-10-08 17:49:17.028067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.076 [2024-10-08 17:49:17.028403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.076 [2024-10-08 17:49:17.028431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.076 [2024-10-08 17:49:17.035228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.076 [2024-10-08 17:49:17.035427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.076 [2024-10-08 17:49:17.035445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.076 [2024-10-08 17:49:17.039224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.076 [2024-10-08 17:49:17.039419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.076 [2024-10-08 17:49:17.039436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.076 [2024-10-08 17:49:17.043041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.076 [2024-10-08 17:49:17.043239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.076 [2024-10-08 17:49:17.043255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.077 [2024-10-08 17:49:17.047008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.077 [2024-10-08 17:49:17.047203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.077 [2024-10-08 17:49:17.047221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.077 [2024-10-08 17:49:17.050730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.077 [2024-10-08 17:49:17.050923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.077 [2024-10-08 17:49:17.050939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.077 [2024-10-08 17:49:17.054760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.077 [2024-10-08 17:49:17.054952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.077 [2024-10-08 17:49:17.054968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.077 [2024-10-08 17:49:17.058462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.077 [2024-10-08 17:49:17.058653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.077 [2024-10-08 17:49:17.058669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.077 [2024-10-08 17:49:17.062824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.077 [2024-10-08 17:49:17.063022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.077 [2024-10-08 17:49:17.063038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.077 [2024-10-08 17:49:17.067007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.077 [2024-10-08 17:49:17.067200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.077 [2024-10-08 17:49:17.067216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.338 [2024-10-08 17:49:17.070771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.338 [2024-10-08 17:49:17.070964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.338 [2024-10-08 17:49:17.070985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.338 [2024-10-08 17:49:17.074860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.338 [2024-10-08 17:49:17.075053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.338 [2024-10-08 17:49:17.075069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.338 [2024-10-08 17:49:17.079453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.338 [2024-10-08 17:49:17.079645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.338 [2024-10-08 17:49:17.079662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.338 [2024-10-08 17:49:17.083060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.338 [2024-10-08 17:49:17.083251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.338 [2024-10-08 17:49:17.083267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.338 [2024-10-08 17:49:17.090853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.338 [2024-10-08 17:49:17.091056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.338 [2024-10-08 17:49:17.091072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.338 [2024-10-08 17:49:17.097837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.338 [2024-10-08 17:49:17.098088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.338 [2024-10-08 17:49:17.098104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.338 [2024-10-08 17:49:17.102912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.338 [2024-10-08 17:49:17.103098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.338 [2024-10-08 17:49:17.103114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.338 [2024-10-08 17:49:17.106859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.338 [2024-10-08 17:49:17.107042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.338 [2024-10-08 17:49:17.107058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.338 [2024-10-08 17:49:17.111018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.338 [2024-10-08 17:49:17.111198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.338 [2024-10-08 17:49:17.111214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.338 [2024-10-08 17:49:17.114877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.338 [2024-10-08 17:49:17.115061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.338 [2024-10-08 17:49:17.115077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.338 [2024-10-08 17:49:17.119723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.338 [2024-10-08 17:49:17.119916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.338 [2024-10-08 17:49:17.119935] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.338 [2024-10-08 17:49:17.126639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.338 [2024-10-08 17:49:17.126820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.338 [2024-10-08 17:49:17.126836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.338 [2024-10-08 17:49:17.131056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.338 [2024-10-08 17:49:17.131236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.338 [2024-10-08 17:49:17.131252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.338 [2024-10-08 17:49:17.134648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.338 [2024-10-08 17:49:17.134828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.338 [2024-10-08 17:49:17.134844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.338 [2024-10-08 17:49:17.139143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.338 [2024-10-08 17:49:17.139346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.139362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.149176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.149442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.149459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.157661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.157768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.157783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.167001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.167255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 
[2024-10-08 17:49:17.167270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.177460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.177538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.177553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.187098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.187341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.187356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.196986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.197213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.197228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.206003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.206296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.206313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.215531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.215799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.215814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.225652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.225758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.225774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.234499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.234764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.234780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.242720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.242808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.242823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.250270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.250365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.250381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.253634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.253682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.253698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.258156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.258213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.258229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.262041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.262111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.262126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.265315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.265383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.265398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.268545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.268589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.268604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.272551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.272601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.272616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.278850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.278905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.278920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.282743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.282791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.282806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.286957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.287010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.287025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.292275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.292321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.292339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.296101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.296160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.296175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.299609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.299654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.299668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.302691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.302747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.302763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.309193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.309273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.309289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.313552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.313598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.313613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.316632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.339 [2024-10-08 17:49:17.316677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.339 [2024-10-08 17:49:17.316692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.339 [2024-10-08 17:49:17.319511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.340 [2024-10-08 17:49:17.319561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.340 [2024-10-08 17:49:17.319575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.340 [2024-10-08 17:49:17.322579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.340 [2024-10-08 17:49:17.322676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.340 [2024-10-08 17:49:17.322691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.340 [2024-10-08 17:49:17.326086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.340 
[2024-10-08 17:49:17.326159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.340 [2024-10-08 17:49:17.326174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.340 [2024-10-08 17:49:17.328903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.340 [2024-10-08 17:49:17.328955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.340 [2024-10-08 17:49:17.328970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.602 [2024-10-08 17:49:17.331632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.602 [2024-10-08 17:49:17.331694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.602 [2024-10-08 17:49:17.331710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.602 [2024-10-08 17:49:17.334474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.602 [2024-10-08 17:49:17.334527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.602 [2024-10-08 17:49:17.334542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.602 [2024-10-08 17:49:17.337148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.602 [2024-10-08 17:49:17.337222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.602 [2024-10-08 17:49:17.337237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.602 [2024-10-08 17:49:17.340589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.602 [2024-10-08 17:49:17.340664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.602 [2024-10-08 17:49:17.340678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.602 [2024-10-08 17:49:17.343159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.602 [2024-10-08 17:49:17.343229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.602 [2024-10-08 17:49:17.343244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.602 [2024-10-08 17:49:17.345643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with 
pdu=0x2000198fef90 00:33:25.602 [2024-10-08 17:49:17.345709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.602 [2024-10-08 17:49:17.345723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.602 [2024-10-08 17:49:17.349048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.602 [2024-10-08 17:49:17.349112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.602 [2024-10-08 17:49:17.349127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.602 [2024-10-08 17:49:17.352131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.602 [2024-10-08 17:49:17.352191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.602 [2024-10-08 17:49:17.352206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.602 [2024-10-08 17:49:17.354662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.602 [2024-10-08 17:49:17.354730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.602 [2024-10-08 17:49:17.354745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.602 [2024-10-08 17:49:17.357226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.602 [2024-10-08 17:49:17.357300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.602 [2024-10-08 17:49:17.357315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.602 [2024-10-08 17:49:17.360468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.602 [2024-10-08 17:49:17.360577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.602 [2024-10-08 17:49:17.360592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.602 [2024-10-08 17:49:17.367486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.602 [2024-10-08 17:49:17.367626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.602 [2024-10-08 17:49:17.367642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.375502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.375574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.375589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.379573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.379663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.379679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.383354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.383425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.383441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.386861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.386930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.386948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.390419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.390473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.390488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.393729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.393784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.393800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.396863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.396924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.396939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.400205] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.400264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.400280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.403307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.403365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.403381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.406562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.406617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.406632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.410432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.410497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.410512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.413972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.414050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.414065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.417471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.417537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.417552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.421491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.421556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.421572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:33:25.603 [2024-10-08 17:49:17.424457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.424516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.424531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.427495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.427557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.427571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.430273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.430353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.430368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.433028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.433083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.433099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.435799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.435855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.435870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.438480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.438535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.438550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.440952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.441020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.441038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.443454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.443510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.443525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.445915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.445996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.446011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.448417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.448477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.448492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.450886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.450942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.450958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.453345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.453401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.453416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.455788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.455844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.455859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.458250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.603 [2024-10-08 17:49:17.458304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.603 [2024-10-08 17:49:17.458320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.603 [2024-10-08 17:49:17.460689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.460745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.460760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.463095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.463154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.463169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.465602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.465660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.465675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.468288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.468341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.468355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.474085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.474149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.474165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.480073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.480127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.480142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.484758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.484813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.484828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.489659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.489713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.489728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.493255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.493311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.493326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.496303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.496363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.496378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.498873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.498952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.498967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.501618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.501682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.501697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.504961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.505036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.505051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.507857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.507924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 
[2024-10-08 17:49:17.507939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.510398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.510455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.510470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.512872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.512928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.512943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.515367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.515409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.515424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.517796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.517862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.517877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.520593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.520637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.520654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.523982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.524036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.524051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.527326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.527369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.527384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.531225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.531310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.531325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.535120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.535309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.535324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.541689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.541922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.541937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.550206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.550270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.550285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.559698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.559941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.559965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.569818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.570077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.570092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.578534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.578639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.578654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.604 [2024-10-08 17:49:17.581868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.604 [2024-10-08 17:49:17.581918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.604 [2024-10-08 17:49:17.581933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.605 [2024-10-08 17:49:17.584526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.605 [2024-10-08 17:49:17.584592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.605 [2024-10-08 17:49:17.584607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.605 [2024-10-08 17:49:17.587284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.605 [2024-10-08 17:49:17.587337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.605 [2024-10-08 17:49:17.587352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.605 [2024-10-08 17:49:17.590175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.605 [2024-10-08 17:49:17.590221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.605 [2024-10-08 17:49:17.590236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.605 [2024-10-08 17:49:17.592909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.605 [2024-10-08 17:49:17.592978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.605 [2024-10-08 17:49:17.592993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.866 [2024-10-08 17:49:17.597863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.597929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.597944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.600417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.600465] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.600480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.602928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.602981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.602996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.605848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.605959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.605979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.609166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.609216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.609230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.614130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.614420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.614437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.618006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.618060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.618075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.620508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.620561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.620576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.623174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.623219] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.623234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.625669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.625737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.625752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.629009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.629093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.629108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.631606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.631653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.631671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.634121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.634180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.634196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.636643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.636692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.636707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.639175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.639227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.639242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.642046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 
[2024-10-08 17:49:17.642092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.642107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.645070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.645170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.645186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.651102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.651380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.651395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.660760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.661008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.661024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.669251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.669542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.669559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.677845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.678117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.678132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.687227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.687446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.687461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.690939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) 
with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.691005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.691020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.693677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.693733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.693748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.696419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.696478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.696493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.699100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.867 [2024-10-08 17:49:17.699154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.867 [2024-10-08 17:49:17.699169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.867 [2024-10-08 17:49:17.701751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.868 [2024-10-08 17:49:17.701827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.868 [2024-10-08 17:49:17.701842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.868 [2024-10-08 17:49:17.705099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.868 [2024-10-08 17:49:17.705188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.868 [2024-10-08 17:49:17.705203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.868 [2024-10-08 17:49:17.708273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.868 [2024-10-08 17:49:17.708372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.868 [2024-10-08 17:49:17.708387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.868 [2024-10-08 17:49:17.712160] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.868 [2024-10-08 17:49:17.712261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.868 [2024-10-08 17:49:17.712276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.868 [2024-10-08 17:49:17.722323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.868 [2024-10-08 17:49:17.722591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.868 [2024-10-08 17:49:17.722613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.868 [2024-10-08 17:49:17.732692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.868 [2024-10-08 17:49:17.732941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.868 [2024-10-08 17:49:17.732956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.868 [2024-10-08 17:49:17.742904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.868 [2024-10-08 17:49:17.743151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.868 [2024-10-08 17:49:17.743166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.868 [2024-10-08 17:49:17.752694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.868 [2024-10-08 17:49:17.752923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.868 [2024-10-08 17:49:17.752938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.868 [2024-10-08 17:49:17.762570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.868 [2024-10-08 17:49:17.762795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.868 [2024-10-08 17:49:17.762810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.868 [2024-10-08 17:49:17.772667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.868 [2024-10-08 17:49:17.772866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.868 [2024-10-08 17:49:17.772881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.868 [2024-10-08 17:49:17.783315] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.868 [2024-10-08 17:49:17.783554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.868 [2024-10-08 17:49:17.783569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.868 [2024-10-08 17:49:17.793865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.868 [2024-10-08 17:49:17.794178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.868 [2024-10-08 17:49:17.794197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.868 [2024-10-08 17:49:17.803905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.868 [2024-10-08 17:49:17.804146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.868 [2024-10-08 17:49:17.804162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.868 [2024-10-08 17:49:17.813669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.868 [2024-10-08 17:49:17.813740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.868 [2024-10-08 17:49:17.813755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.868 [2024-10-08 17:49:17.824096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.868 [2024-10-08 17:49:17.824299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.868 [2024-10-08 17:49:17.824314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.868 [2024-10-08 17:49:17.834642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.868 [2024-10-08 17:49:17.834731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.868 [2024-10-08 17:49:17.834746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.868 [2024-10-08 17:49:17.843411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.868 [2024-10-08 17:49:17.843720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.868 [2024-10-08 17:49:17.843736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:33:25.868 [2024-10-08 17:49:17.852496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:25.868 [2024-10-08 17:49:17.852720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.868 [2024-10-08 17:49:17.852735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.860541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.860642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.130 [2024-10-08 17:49:17.860657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.870847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.871046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.130 [2024-10-08 17:49:17.871061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.880521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.880769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.130 [2024-10-08 17:49:17.880784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.886776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.887024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.130 [2024-10-08 17:49:17.887039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.897043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.897297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.130 [2024-10-08 17:49:17.897312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.902368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.902450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.130 [2024-10-08 17:49:17.902465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.905312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.905363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.130 [2024-10-08 17:49:17.905378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.908032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.908086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.130 [2024-10-08 17:49:17.908101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.910681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.910728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.130 [2024-10-08 17:49:17.910743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.913350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.913394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.130 [2024-10-08 17:49:17.913409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.915921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.915989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.130 [2024-10-08 17:49:17.916004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.918609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.918664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.130 [2024-10-08 17:49:17.918679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.921456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.921513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.130 [2024-10-08 17:49:17.921529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.924421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.924491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.130 [2024-10-08 17:49:17.924507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.927719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.927796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.130 [2024-10-08 17:49:17.927811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.932019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.932214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.130 [2024-10-08 17:49:17.932229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.935131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.935193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.130 [2024-10-08 17:49:17.935208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.939174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.939253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.130 [2024-10-08 17:49:17.939267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.943144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.943229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.130 [2024-10-08 17:49:17.943244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.947616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.947870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.130 [2024-10-08 17:49:17.947888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.953437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.953536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.130 [2024-10-08 17:49:17.953552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.959621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.959695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.130 [2024-10-08 17:49:17.959710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.130 [2024-10-08 17:49:17.965996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.130 [2024-10-08 17:49:17.966140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:17.966156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:17.974275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:17.974342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:17.974357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:17.980332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:17.980394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:17.980409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:17.987570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:17.987631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:17.987647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:17.990304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:17.990365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 
[2024-10-08 17:49:17.990380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:17.992953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:17.993027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:17.993042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:17.995689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:17.995748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:17.995764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:17.998446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:17.998516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:17.998530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:18.001161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:18.001212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:18.001227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:18.003888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:18.003955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:18.003970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:18.006416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:18.006488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:18.006502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:18.008969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:18.009031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:18.009046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:18.015216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 6476.00 IOPS, 809.50 MiB/s [2024-10-08T15:49:18.123Z] [2024-10-08 17:49:18.015371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:18.015385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:18.021147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:18.021215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:18.021230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:18.029260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:18.029553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:18.029568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:18.036777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:18.037048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:18.037064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:18.045718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:18.045766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:18.045782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:18.050062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:18.050109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:18.050124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:18.053370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:18.053423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:18.053438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:18.057971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:18.058038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:18.058053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:18.065503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:18.065566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:18.065581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:18.069675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:18.069733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:18.069748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:18.074396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:18.074735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:18.074751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:18.082040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:18.082306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:18.082324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:18.088236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:18.088441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:18.088456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:18.096211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 
[2024-10-08 17:49:18.096257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:18.096272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:18.104454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:18.104659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:18.104674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.131 [2024-10-08 17:49:18.113605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.131 [2024-10-08 17:49:18.113791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.131 [2024-10-08 17:49:18.113806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.122909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.123010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.123025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.132809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.133103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.133119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.142515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.142738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.142753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.152511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.152762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.152778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.162538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) 
with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.162697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.162712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.173018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.173274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.173289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.182766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.182995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.183011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.193375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.193624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.193639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.203719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.203965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.203985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.213540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.213764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.213779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.223994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.224283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.224299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.234374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.234597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.234612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.244507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.244730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.244750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.255036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.255267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.255282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.265068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.265131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.265147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.275511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.275804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.275820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.285420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.285631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.285646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.295523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.295800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.295815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.306079] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.306358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.306374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.316156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.316464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.316480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.326488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.326751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.326767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.337487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.337743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.337758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.347678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.347924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.393 [2024-10-08 17:49:18.347939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.393 [2024-10-08 17:49:18.357828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.393 [2024-10-08 17:49:18.358018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.394 [2024-10-08 17:49:18.358034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.394 [2024-10-08 17:49:18.367921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.394 [2024-10-08 17:49:18.368163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.394 [2024-10-08 17:49:18.368179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
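[Editor's note] The failure triplet repeating above (tcp.c:2233 data_crc32_calc_done error, then the WRITE command print, then the TRANSIENT TRANSPORT ERROR completion) is the host rejecting NVMe/TCP DATA PDUs whose DDGST trailer does not match a CRC32C recomputed over the PDU payload. The sketch below is a minimal, self-contained illustration of that check under the usual CRC32C convention (Castagnoli polynomial, initial value 0xFFFFFFFF, final inversion); it is not SPDK's implementation (SPDK's util library has a hardware-accelerated CRC32C helper), and crc32c() / data_digest_ok() are hypothetical names, with the payload buffer a stand-in for real PDU data.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Software CRC32C (Castagnoli), reflected polynomial 0x82F63B78,
     * initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* Receive-side check mirrored by the log: recompute the digest over
     * the DATA PDU payload and compare with the DDGST trailer it carried. */
    static int data_digest_ok(const uint8_t *payload, size_t len, uint32_t ddgst)
    {
        return crc32c(payload, len) == ddgst;
    }

    int main(void)
    {
        uint8_t payload[512] = { 0 };   /* stand-in for a DATA PDU payload */
        uint32_t good = crc32c(payload, sizeof(payload));
        printf("matching digest ok: %d\n",
               data_digest_ok(payload, sizeof(payload), good));
        printf("corrupted digest ok: %d\n",
               data_digest_ok(payload, sizeof(payload), good ^ 1));
        return 0;
    }

With a matching trailer the check passes; flipping any bit of the digest or the payload reproduces the mismatch that tcp.c:2233 logs as "Data digest error", which is exactly the corruption this test injects on every WRITE.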
00:33:26.394 [2024-10-08 17:49:18.376027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.394 [2024-10-08 17:49:18.376083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.394 [2024-10-08 17:49:18.376098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.394 [2024-10-08 17:49:18.378824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.394 [2024-10-08 17:49:18.378910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.394 [2024-10-08 17:49:18.378925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.394 [2024-10-08 17:49:18.381542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.394 [2024-10-08 17:49:18.381603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.394 [2024-10-08 17:49:18.381617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.394 [2024-10-08 17:49:18.384193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.394 [2024-10-08 17:49:18.384267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.394 [2024-10-08 17:49:18.384281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.386780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.386832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.386847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.389383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.389439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.389454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.392094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.392158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.392173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.394694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.394757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.394772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.397371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.397425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.397440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.399880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.399934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.399949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.402392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.402453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.402469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.406646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.406929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.406946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.414122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.414212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.414227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.418042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.418102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.418119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.426055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.426173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.426189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.434954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.435268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.435284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.446047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.446336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.446352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.456397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.456705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.456721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.466787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.467068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.467084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.476874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.477139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.477154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.487363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.487586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.487601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.497458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.497725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.497739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.507676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.507954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.507970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.518480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.518747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.518763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.529270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.529483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.529498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.537211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.537462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.537478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.541322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.541423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.541439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.544838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.544959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 
[2024-10-08 17:49:18.544979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.548358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.548460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.548475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.551524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.655 [2024-10-08 17:49:18.551588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.655 [2024-10-08 17:49:18.551603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.655 [2024-10-08 17:49:18.554405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.656 [2024-10-08 17:49:18.554469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.656 [2024-10-08 17:49:18.554484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.656 [2024-10-08 17:49:18.557223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.656 [2024-10-08 17:49:18.557282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.656 [2024-10-08 17:49:18.557297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.656 [2024-10-08 17:49:18.560189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.656 [2024-10-08 17:49:18.560240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.656 [2024-10-08 17:49:18.560255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.656 [2024-10-08 17:49:18.563259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.656 [2024-10-08 17:49:18.563326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.656 [2024-10-08 17:49:18.563341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.656 [2024-10-08 17:49:18.565937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.656 [2024-10-08 17:49:18.565991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:26.656 [2024-10-08 17:49:18.566006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.656 [2024-10-08 17:49:18.568474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.656 [2024-10-08 17:49:18.568529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.656 [2024-10-08 17:49:18.568545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.656 [2024-10-08 17:49:18.571071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.656 [2024-10-08 17:49:18.571130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.656 [2024-10-08 17:49:18.571144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.656 [2024-10-08 17:49:18.573878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.656 [2024-10-08 17:49:18.573939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.656 [2024-10-08 17:49:18.573953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.656 [2024-10-08 17:49:18.576415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.656 [2024-10-08 17:49:18.576488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.656 [2024-10-08 17:49:18.576503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.656 [2024-10-08 17:49:18.579425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.656 [2024-10-08 17:49:18.579509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.656 [2024-10-08 17:49:18.579528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.656 [2024-10-08 17:49:18.585681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.656 [2024-10-08 17:49:18.585927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.656 [2024-10-08 17:49:18.585942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.656 [2024-10-08 17:49:18.595433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.656 [2024-10-08 17:49:18.595703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.656 [2024-10-08 17:49:18.595718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.656 [2024-10-08 17:49:18.605524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.656 [2024-10-08 17:49:18.605782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.656 [2024-10-08 17:49:18.605797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.656 [2024-10-08 17:49:18.615846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.656 [2024-10-08 17:49:18.616059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.656 [2024-10-08 17:49:18.616074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.656 [2024-10-08 17:49:18.626068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.656 [2024-10-08 17:49:18.626293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.656 [2024-10-08 17:49:18.626308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.656 [2024-10-08 17:49:18.636490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.656 [2024-10-08 17:49:18.636786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.656 [2024-10-08 17:49:18.636802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.917 [2024-10-08 17:49:18.646870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.917 [2024-10-08 17:49:18.647086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.917 [2024-10-08 17:49:18.647102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.917 [2024-10-08 17:49:18.657464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.917 [2024-10-08 17:49:18.657567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.917 [2024-10-08 17:49:18.657582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.917 [2024-10-08 17:49:18.667378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.917 [2024-10-08 17:49:18.667644] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.917 [2024-10-08 17:49:18.667660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.917 [2024-10-08 17:49:18.677765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.917 [2024-10-08 17:49:18.678058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.917 [2024-10-08 17:49:18.678073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.917 [2024-10-08 17:49:18.688005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.917 [2024-10-08 17:49:18.688077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.917 [2024-10-08 17:49:18.688093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.917 [2024-10-08 17:49:18.695834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.917 [2024-10-08 17:49:18.695892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.917 [2024-10-08 17:49:18.695907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.917 [2024-10-08 17:49:18.698554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.917 [2024-10-08 17:49:18.698621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.917 [2024-10-08 17:49:18.698636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.917 [2024-10-08 17:49:18.701184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.917 [2024-10-08 17:49:18.701252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.917 [2024-10-08 17:49:18.701267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.917 [2024-10-08 17:49:18.703764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.917 [2024-10-08 17:49:18.703820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.917 [2024-10-08 17:49:18.703835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.917 [2024-10-08 17:49:18.706397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.917 [2024-10-08 17:49:18.706450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.917 [2024-10-08 17:49:18.706465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.917 [2024-10-08 17:49:18.708967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.917 [2024-10-08 17:49:18.709019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.917 [2024-10-08 17:49:18.709034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.917 [2024-10-08 17:49:18.711561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.917 [2024-10-08 17:49:18.711613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.917 [2024-10-08 17:49:18.711628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.917 [2024-10-08 17:49:18.714020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.917 [2024-10-08 17:49:18.714075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.917 [2024-10-08 17:49:18.714090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.917 [2024-10-08 17:49:18.716600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.716654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.716669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.719170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.719225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.719240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.721628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.721698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.721713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.724531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 
[2024-10-08 17:49:18.724665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.724680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.727785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.727854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.727869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.735005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.735064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.735079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.740504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.740752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.740770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.745652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.745751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.745767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.748447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.748537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.748552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.751211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.751289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.751304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.753984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with 
pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.754063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.754078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.757081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.757189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.757205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.760086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.760163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.760179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.764964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.765247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.765262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.772862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.773174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.773190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.778036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.778128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.778143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.780890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.780993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.781009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.783703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.783791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.783807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.786514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.786599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.786614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.791513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.791603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.791619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.798624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.798713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.798728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.801478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.801564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.801579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.804414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.804554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.804569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.807949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.808062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.808077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.810629] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.810723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.810738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.815590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.815682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.815697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.818285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.818371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.818387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.820902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.820999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.821015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.918 [2024-10-08 17:49:18.823609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.918 [2024-10-08 17:49:18.823700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.918 [2024-10-08 17:49:18.823716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.919 [2024-10-08 17:49:18.826930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.919 [2024-10-08 17:49:18.827050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.919 [2024-10-08 17:49:18.827065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.919 [2024-10-08 17:49:18.834831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.919 [2024-10-08 17:49:18.835082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.919 [2024-10-08 17:49:18.835098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
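Each triplet in the run above records one corrupted transfer: the TCP transport's data_crc32_calc_done() rejects the PDU's CRC-32C data digest, and the host then prints the affected WRITE command together with its TRANSIENT TRANSPORT ERROR (00/22) completion. A minimal sanity check over a saved copy of this output, assuming it was captured to bperf.log (a hypothetical file name, not something the test itself writes):

  # Count digest-error records; the total should line up with the
  # command_transient_transport_error counter queried after the run.
  grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' bperf.log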
00:33:26.919 [2024-10-08 17:49:18.844317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.919 [2024-10-08 17:49:18.844612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.919 [2024-10-08 17:49:18.844637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.919 [2024-10-08 17:49:18.853807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.919 [2024-10-08 17:49:18.854056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.919 [2024-10-08 17:49:18.854074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.919 [2024-10-08 17:49:18.860449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.919 [2024-10-08 17:49:18.860525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.919 [2024-10-08 17:49:18.860541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.919 [2024-10-08 17:49:18.864864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.919 [2024-10-08 17:49:18.865124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.919 [2024-10-08 17:49:18.865139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.919 [2024-10-08 17:49:18.873157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.919 [2024-10-08 17:49:18.873401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.919 [2024-10-08 17:49:18.873416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.919 [2024-10-08 17:49:18.881088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.919 [2024-10-08 17:49:18.881332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.919 [2024-10-08 17:49:18.881348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.919 [2024-10-08 17:49:18.889251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.919 [2024-10-08 17:49:18.889514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.919 [2024-10-08 17:49:18.889529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.919 [2024-10-08 17:49:18.895135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.919 [2024-10-08 17:49:18.895214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.919 [2024-10-08 17:49:18.895229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.919 [2024-10-08 17:49:18.899098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.919 [2024-10-08 17:49:18.899174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.919 [2024-10-08 17:49:18.899189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.919 [2024-10-08 17:49:18.903137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:26.919 [2024-10-08 17:49:18.903412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.919 [2024-10-08 17:49:18.903427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.185 [2024-10-08 17:49:18.911335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:27.185 [2024-10-08 17:49:18.911429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.185 [2024-10-08 17:49:18.911444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.185 [2024-10-08 17:49:18.916545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:27.185 [2024-10-08 17:49:18.916876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.185 [2024-10-08 17:49:18.916892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.185 [2024-10-08 17:49:18.924550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:27.185 [2024-10-08 17:49:18.924633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.185 [2024-10-08 17:49:18.924648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.185 [2024-10-08 17:49:18.930864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:27.185 [2024-10-08 17:49:18.930958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.185 [2024-10-08 17:49:18.930978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.185 [2024-10-08 17:49:18.937061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:27.185 [2024-10-08 17:49:18.937138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.185 [2024-10-08 17:49:18.937153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.185 [2024-10-08 17:49:18.945722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:27.185 [2024-10-08 17:49:18.945778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.185 [2024-10-08 17:49:18.945794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.185 [2024-10-08 17:49:18.953011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:27.185 [2024-10-08 17:49:18.953278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.185 [2024-10-08 17:49:18.953303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.185 [2024-10-08 17:49:18.958723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:27.185 [2024-10-08 17:49:18.958775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.185 [2024-10-08 17:49:18.958790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.185 [2024-10-08 17:49:18.963568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:27.185 [2024-10-08 17:49:18.963800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.185 [2024-10-08 17:49:18.963815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.185 [2024-10-08 17:49:18.971949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:27.185 [2024-10-08 17:49:18.972003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.185 [2024-10-08 17:49:18.972018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.185 [2024-10-08 17:49:18.981334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90 00:33:27.185 [2024-10-08 17:49:18.981387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.185 [2024-10-08 17:49:18.981403] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.185 [2024-10-08 17:49:18.990559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90
00:33:27.185 [2024-10-08 17:49:18.990823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.185 [2024-10-08 17:49:18.990838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:27.185 [2024-10-08 17:49:19.000801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90
00:33:27.185 [2024-10-08 17:49:19.001105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.185 [2024-10-08 17:49:19.001122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:27.185 [2024-10-08 17:49:19.011476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90
00:33:27.185 [2024-10-08 17:49:19.011741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.185 [2024-10-08 17:49:19.011757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:27.185 5633.50 IOPS, 704.19 MiB/s [2024-10-08T15:49:19.177Z] [2024-10-08 17:49:19.021881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf2cdc0) with pdu=0x2000198fef90
00:33:27.185 [2024-10-08 17:49:19.022192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.185 [2024-10-08 17:49:19.022209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.185
00:33:27.185 Latency(us)
00:33:27.185 [2024-10-08T15:49:19.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:27.185 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:33:27.185 nvme0n1 : 2.01 5623.70 702.96 0.00 0.00 2838.41 1174.19 11960.32
00:33:27.185 [2024-10-08T15:49:19.177Z] ===================================================================================================================
00:33:27.185 [2024-10-08T15:49:19.177Z] Total : 5623.70 702.96 0.00 0.00 2838.41 1174.19 11960.32
00:33:27.185 {
00:33:27.185   "results": [
00:33:27.185     {
00:33:27.185       "job": "nvme0n1",
00:33:27.185       "core_mask": "0x2",
00:33:27.185       "workload": "randwrite",
00:33:27.185       "status": "finished",
00:33:27.185       "queue_depth": 16,
00:33:27.185       "io_size": 131072,
00:33:27.185       "runtime": 2.007042,
00:33:27.185       "iops": 5623.698955976009,
00:33:27.185       "mibps": 702.9623694970011,
00:33:27.185       "io_failed": 0,
00:33:27.185       "io_timeout": 0,
00:33:27.185       "avg_latency_us": 2838.412115413012,
00:33:27.185       "min_latency_us": 1174.1866666666667,
00:33:27.185       "max_latency_us": 11960.32
00:33:27.185     }
00:33:27.185   ],
00:33:27.185   "core_count": 1
00:33:27.185 }
00:33:27.185 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- #
get_transient_errcount nvme0n1
00:33:27.185 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:27.185 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:27.185 | .driver_specific
00:33:27.185 | .nvme_error
00:33:27.185 | .status_code
00:33:27.185 | .command_transient_transport_error'
00:33:27.185 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:27.446 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 364 > 0 ))
00:33:27.446 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 549991
00:33:27.446 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 549991 ']'
00:33:27.446 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 549991
00:33:27.446 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:33:27.447 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:27.447 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 549991
00:33:27.447 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:33:27.447 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:33:27.447 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 549991'
00:33:27.447 killing process with pid 549991
00:33:27.447 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 549991
00:33:27.447 Received shutdown signal, test time was about 2.000000 seconds
00:33:27.447
00:33:27.447 Latency(us)
00:33:27.447 [2024-10-08T15:49:19.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:27.447 [2024-10-08T15:49:19.439Z] ===================================================================================================================
00:33:27.447 [2024-10-08T15:49:19.439Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:27.447 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 549991
00:33:27.447 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 547548
00:33:27.447 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 547548 ']'
00:33:27.447 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 547548
00:33:27.447 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:33:27.447 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:27.708 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 547548
17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
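The trace above is the suite's pass/fail probe: bperf_rpc asks the bdevperf application over /var/tmp/bperf.sock for the bdev's iostat, and jq pulls out the transient-transport-error counter that the injected digest failures incremented (here 364, so the (( 364 > 0 )) assertion passes). Reassembled from the traced fragments, the helper is essentially:

  get_transient_errcount() {
      local bdev=$1
      # Query per-bdev iostat over the bperf RPC socket, then extract the
      # transient transport error count from the NVMe error statistics.
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0]
              | .driver_specific
              | .nvme_error
              | .status_code
              | .command_transient_transport_error'
  }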
00:33:27.708 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:27.708 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 547548' 00:33:27.708 killing process with pid 547548 00:33:27.708 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 547548 00:33:27.708 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 547548 00:33:27.708 00:33:27.708 real 0m16.640s 00:33:27.708 user 0m32.886s 00:33:27.708 sys 0m3.704s 00:33:27.708 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:27.708 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:27.708 ************************************ 00:33:27.708 END TEST nvmf_digest_error 00:33:27.708 ************************************ 00:33:27.708 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:27.708 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:27.708 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:27.708 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:33:27.708 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:27.708 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:33:27.708 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:27.708 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:27.708 rmmod nvme_tcp 00:33:27.708 rmmod nvme_fabrics 00:33:27.708 rmmod nvme_keyring 00:33:27.968 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:27.968 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:33:27.968 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:33:27.968 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 547548 ']' 00:33:27.968 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 547548 00:33:27.968 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 547548 ']' 00:33:27.968 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 547548 00:33:27.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (547548) - No such process 00:33:27.968 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 547548 is not found' 00:33:27.968 Process with pid 547548 is not found 00:33:27.968 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:27.968 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:27.968 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:27.968 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:33:27.968 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:33:27.968 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:27.968 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@789 -- # iptables-restore 00:33:27.968 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:27.968 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:27.968 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.968 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:27.968 17:49:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:29.880 17:49:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:29.880 00:33:29.880 real 0m43.454s 00:33:29.880 user 1m8.358s 00:33:29.880 sys 0m13.157s 00:33:29.880 17:49:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:29.880 17:49:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:29.880 ************************************ 00:33:29.880 END TEST nvmf_digest 00:33:29.880 ************************************ 00:33:29.880 17:49:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:33:29.880 17:49:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:33:29.880 17:49:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:33:29.880 17:49:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:29.880 17:49:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:29.880 17:49:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:29.880 17:49:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.142 ************************************ 00:33:30.142 START TEST nvmf_bdevperf 00:33:30.142 ************************************ 00:33:30.142 17:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:30.142 * Looking for test storage... 
00:33:30.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:30.142 17:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:30.142 17:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:33:30.142 17:49:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:30.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.142 --rc genhtml_branch_coverage=1 00:33:30.142 --rc genhtml_function_coverage=1 00:33:30.142 --rc genhtml_legend=1 00:33:30.142 --rc geninfo_all_blocks=1 00:33:30.142 --rc geninfo_unexecuted_blocks=1 00:33:30.142 00:33:30.142 ' 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:30.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.142 --rc genhtml_branch_coverage=1 00:33:30.142 --rc genhtml_function_coverage=1 00:33:30.142 --rc genhtml_legend=1 00:33:30.142 --rc geninfo_all_blocks=1 00:33:30.142 --rc geninfo_unexecuted_blocks=1 00:33:30.142 00:33:30.142 ' 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:30.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.142 --rc genhtml_branch_coverage=1 00:33:30.142 --rc genhtml_function_coverage=1 00:33:30.142 --rc genhtml_legend=1 00:33:30.142 --rc geninfo_all_blocks=1 00:33:30.142 --rc geninfo_unexecuted_blocks=1 00:33:30.142 00:33:30.142 ' 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:30.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.142 --rc genhtml_branch_coverage=1 00:33:30.142 --rc genhtml_function_coverage=1 00:33:30.142 --rc genhtml_legend=1 00:33:30.142 --rc geninfo_all_blocks=1 00:33:30.142 --rc geninfo_unexecuted_blocks=1 00:33:30.142 00:33:30.142 ' 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:30.142 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:30.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:33:30.143 17:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:38.281 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:38.282 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:38.282 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
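The enumeration above resolves each allow-listed PCI function to its kernel network interface by globbing sysfs, which is how the two E810 ports end up as cvl_0_0 and cvl_0_1 below. A standalone sketch of the same lookup, with the device list hard-coded from this run:

  for pci in 0000:31:00.0 0000:31:00.1; do
      # Each PCI network function exposes its netdev name(s) under .../net/.
      for path in "/sys/bus/pci/devices/$pci/net/"*; do
          echo "Found net devices under $pci: ${path##*/}"
      done
  done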
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]]
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:33:38.282 Found net devices under 0000:31:00.0: cvl_0_0
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]]
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:33:38.282 Found net devices under 0000:31:00.1: cvl_0_1
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:33:38.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:38.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms
00:33:38.282
00:33:38.282 --- 10.0.0.2 ping statistics ---
00:33:38.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:38.282 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:38.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:38.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms
00:33:38.282
00:33:38.282 --- 10.0.0.1 ping statistics ---
00:33:38.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:38.282 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=555038
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 555038
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 555038 ']'
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:38.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:33:38.282 17:49:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
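Condensed for reference: the nvmf_tcp_init sequence traced above turns the single test host into a two-endpoint TCP topology by pinning the target-side port (cvl_0_0) into its own network namespace while the initiator port (cvl_0_1) stays in the default one. A minimal sketch of the same steps, with the interface names and the 10.0.0.0/24 addresses taken from this log:

  ip netns add cvl_0_0_ns_spdk                                   # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, default netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # verify reachability both ways
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every target-side command from here on is prefixed with ip netns exec cvl_0_0_ns_spdk (that is what NVMF_TARGET_NS_CMD holds), which is why nvmf_tgt is launched that way above. Its -m 0xE core mask is binary 1110, i.e. cores 1-3; core 0 is left free for the bdevperf initiator started later with -c 0x1.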
00:33:38.282 [2024-10-08 17:49:29.840252] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
00:33:38.282 [2024-10-08 17:49:29.840325] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:38.282 [2024-10-08 17:49:29.929584] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:33:38.282 [2024-10-08 17:49:30.025716] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:38.282 [2024-10-08 17:49:30.025777] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:38.282 [2024-10-08 17:49:30.025787] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:38.282 [2024-10-08 17:49:30.025794] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:38.282 [2024-10-08 17:49:30.025801] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:38.282 [2024-10-08 17:49:30.027305] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:33:38.282 [2024-10-08 17:49:30.027463] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:33:38.282 [2024-10-08 17:49:30.027463] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:38.854 [2024-10-08 17:49:30.697836] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
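With the transport created, tgt_init builds the rest of the target over JSON-RPC on /var/tmp/spdk.sock (rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client). For reference, the same five calls issued with the stock scripts/rpc.py client would look roughly like this; a sketch, with the extra -o that the wrapper passes to nvmf_create_transport left out:

  scripts/rpc.py nvmf_create_transport -t tcp -u 8192                # TCP transport, 8 KiB I/O unit
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                # 64 MB RAM bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The malloc bdev, subsystem, namespace and listener calls follow in the trace below.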
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:38.854 Malloc0
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:38.854 [2024-10-08 17:49:30.777504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=()
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:33:38.854 {
00:33:38.854 "params": {
00:33:38.854 "name": "Nvme$subsystem",
00:33:38.854 "trtype": "$TEST_TRANSPORT",
00:33:38.854 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:38.854 "adrfam": "ipv4",
00:33:38.854 "trsvcid": "$NVMF_PORT",
00:33:38.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:38.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:38.854 "hdgst": ${hdgst:-false},
00:33:38.854 "ddgst": ${ddgst:-false}
00:33:38.854 },
00:33:38.854 "method": "bdev_nvme_attach_controller"
00:33:38.854 }
00:33:38.854 EOF
00:33:38.854 )")
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq .
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=,
00:33:38.854 17:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:33:38.854 "params": {
00:33:38.854 "name": "Nvme1",
00:33:38.854 "trtype": "tcp",
00:33:38.854 "traddr": "10.0.0.2",
00:33:38.854 "adrfam": "ipv4",
00:33:38.854 "trsvcid": "4420",
00:33:38.854 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:38.855 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:38.855 "hdgst": false,
00:33:38.855 "ddgst": false
00:33:38.855 },
00:33:38.855 "method": "bdev_nvme_attach_controller"
00:33:38.855 }'
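Worth noting how the initiator gets its configuration: gen_nvmf_target_json expands the heredoc template into the bdev_nvme_attach_controller call just printed, and the harness feeds it to bdevperf via process substitution, which is where the --json /dev/fd/62 path above (and /dev/fd/63 later) comes from; no config file ever touches disk. Written out by hand, the invocation is equivalent to roughly:

  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1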
00:33:38.855 [2024-10-08 17:49:30.836687] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
00:33:38.855 [2024-10-08 17:49:30.836755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid555385 ]
00:33:39.115 [2024-10-08 17:49:30.921383] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:39.115 [2024-10-08 17:49:31.020451] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:33:39.375 Running I/O for 1 seconds...
00:33:40.316 8548.00 IOPS, 33.39 MiB/s
00:33:40.316 Latency(us)
[2024-10-08T15:49:32.308Z] Device Information                                                     : runtime(s)    IOPS     MiB/s    Fail/s    TO/s     Average       min        max
00:33:40.317 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:40.317 Verification LBA range: start 0x0 length 0x4000
00:33:40.317 Nvme1n1                                                                :       1.01  8577.44    33.51     0.00      0.00    14862.31   3031.04   15728.64
[2024-10-08T15:49:32.309Z] ===================================================================================================================
[2024-10-08T15:49:32.309Z] Total                                                                  :             8577.44    33.51     0.00      0.00    14862.31   3031.04   15728.64
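A quick sanity check on the results table: with 4096-byte I/Os the MiB/s column is just IOPS scaled by the I/O size, and the Average/min/max columns are per-I/O latency in microseconds. For example:

  echo 'scale=4; 8577.44 * 4096 / 1048576' | bc   # 33.5056, reported above as 33.51 MiB/s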
00:33:40.578 17:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=555710
00:33:40.578 17:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:33:40.578 17:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:33:40.578 17:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:33:40.578 17:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=()
00:33:40.578 17:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config
00:33:40.578 17:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:33:40.578 17:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:33:40.578 {
00:33:40.578 "params": {
00:33:40.578 "name": "Nvme$subsystem",
00:33:40.578 "trtype": "$TEST_TRANSPORT",
00:33:40.578 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:40.578 "adrfam": "ipv4",
00:33:40.578 "trsvcid": "$NVMF_PORT",
00:33:40.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:40.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:40.578 "hdgst": ${hdgst:-false},
00:33:40.578 "ddgst": ${ddgst:-false}
00:33:40.578 },
00:33:40.578 "method": "bdev_nvme_attach_controller"
00:33:40.578 }
00:33:40.578 EOF
00:33:40.578 )")
00:33:40.578 17:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat
00:33:40.578 17:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq .
00:33:40.578 17:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=,
00:33:40.578 17:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:33:40.578 "params": {
00:33:40.578 "name": "Nvme1",
00:33:40.578 "trtype": "tcp",
00:33:40.578 "traddr": "10.0.0.2",
00:33:40.578 "adrfam": "ipv4",
00:33:40.578 "trsvcid": "4420",
00:33:40.578 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:40.578 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:40.578 "hdgst": false,
00:33:40.578 "ddgst": false
00:33:40.578 },
00:33:40.578 "method": "bdev_nvme_attach_controller"
00:33:40.578 }'
00:33:40.578 [2024-10-08 17:49:32.468703] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
00:33:40.578 [2024-10-08 17:49:32.468785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid555710 ]
00:33:40.578 [2024-10-08 17:49:32.549549] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:40.839 [2024-10-08 17:49:32.645367] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:33:41.099 Running I/O for 15 seconds...
00:33:42.980 8859.00 IOPS, 34.61 MiB/s
[2024-10-08T15:49:35.544Z] 8919.00 IOPS, 34.84 MiB/s
[2024-10-08T15:49:35.544Z] 17:49:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 555038
17:49:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:33:43.552 [2024-10-08 17:49:35.431514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:43.552 [2024-10-08 17:49:35.431554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:43.552 [2024-10-08 17:49:35.431575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:43.552 [2024-10-08 17:49:35.431585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 124 further command/completion pairs in the same pattern elided: WRITEs lba 55152-55784 (SGL DATA BLOCK) and READs lba 54768-55112 (SGL TRANSPORT DATA BLOCK), every one completed with ABORTED - SQ DELETION (00/08) qid:1 ...]
00:33:43.555 [2024-10-08 17:49:35.433827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:55120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.555 [2024-10-08 17:49:35.433835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
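The NOTICE storm above comes from the initiator, not the target: host/bdevperf.sh@33 hard-killed nvmf_tgt (pid 555038) mid-run, so the NVMe driver drains the dead qpair by completing every still-queued command locally with ABORTED - SQ DELETION. The 127 pairs here plus the one command completed manually just below add up exactly to the -q 128 queue depth. The step itself, as traced earlier in this excerpt:

  kill -9 555038   # host/bdevperf.sh@33: SIGKILL the target, no clean NVMe shutdown
  sleep 3          # host/bdevperf.sh@35: the -t 15 run keeps going and retries the controller

(Reading the -f flag on this bdevperf run as "keep running despite I/O failure" is an inference from the behaviour here, not from its help text.)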
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ea560 is same with the state(6) to be set 00:33:43.555 [2024-10-08 17:49:35.433854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:43.555 [2024-10-08 17:49:35.433860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:43.555 [2024-10-08 17:49:35.433867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55128 len:8 PRP1 0x0 PRP2 0x0 00:33:43.555 [2024-10-08 17:49:35.433874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.555 [2024-10-08 17:49:35.433912] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14ea560 was disconnected and freed. reset controller. 00:33:43.555 [2024-10-08 17:49:35.433959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.555 [2024-10-08 17:49:35.433969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.555 [2024-10-08 17:49:35.433983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.555 [2024-10-08 17:49:35.433991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.555 [2024-10-08 17:49:35.433999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.555 [2024-10-08 17:49:35.434006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.555 [2024-10-08 17:49:35.434017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.555 [2024-10-08 17:49:35.434024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.555 [2024-10-08 17:49:35.434031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:43.555 [2024-10-08 17:49:35.437554] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:43.555 [2024-10-08 17:49:35.437575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:43.555 [2024-10-08 17:49:35.438390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.556 [2024-10-08 17:49:35.438427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:43.556 [2024-10-08 17:49:35.438439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:43.556 [2024-10-08 17:49:35.438680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:43.556 [2024-10-08 17:49:35.438903] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:43.556 [2024-10-08 17:49:35.438912] 
nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:43.556 [2024-10-08 17:49:35.438922] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:43.556 [2024-10-08 17:49:35.442478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:43.556 [2024-10-08 17:49:35.451681] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:43.556 [2024-10-08 17:49:35.452356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.556 [2024-10-08 17:49:35.452396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:43.556 [2024-10-08 17:49:35.452407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:43.556 [2024-10-08 17:49:35.452646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:43.556 [2024-10-08 17:49:35.452868] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:43.556 [2024-10-08 17:49:35.452877] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:43.556 [2024-10-08 17:49:35.452885] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:43.556 [2024-10-08 17:49:35.456440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:43.556 [2024-10-08 17:49:35.465633] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:43.556 [2024-10-08 17:49:35.466278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.556 [2024-10-08 17:49:35.466318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:43.556 [2024-10-08 17:49:35.466328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:43.556 [2024-10-08 17:49:35.466568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:43.556 [2024-10-08 17:49:35.466791] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:43.556 [2024-10-08 17:49:35.466800] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:43.556 [2024-10-08 17:49:35.466813] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:43.556 [2024-10-08 17:49:35.470371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:43.556 [2024-10-08 17:49:35.479575] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.556 [2024-10-08 17:49:35.480232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.556 [2024-10-08 17:49:35.480272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.556 [2024-10-08 17:49:35.480284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.556 [2024-10-08 17:49:35.480524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.556 [2024-10-08 17:49:35.480747] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.556 [2024-10-08 17:49:35.480757] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.556 [2024-10-08 17:49:35.480765] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.556 [2024-10-08 17:49:35.484325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.556 [2024-10-08 17:49:35.493517] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.556 [2024-10-08 17:49:35.494176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.556 [2024-10-08 17:49:35.494218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.556 [2024-10-08 17:49:35.494229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.556 [2024-10-08 17:49:35.494470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.556 [2024-10-08 17:49:35.494693] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.556 [2024-10-08 17:49:35.494701] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.556 [2024-10-08 17:49:35.494710] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.556 [2024-10-08 17:49:35.498268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.556 [2024-10-08 17:49:35.507480] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.556 [2024-10-08 17:49:35.508196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.556 [2024-10-08 17:49:35.508239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.556 [2024-10-08 17:49:35.508250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.556 [2024-10-08 17:49:35.508493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.556 [2024-10-08 17:49:35.508716] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.556 [2024-10-08 17:49:35.508725] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.556 [2024-10-08 17:49:35.508732] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.556 [2024-10-08 17:49:35.512290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.556 [2024-10-08 17:49:35.521277] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.556 [2024-10-08 17:49:35.521820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.556 [2024-10-08 17:49:35.521847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.556 [2024-10-08 17:49:35.521855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.556 [2024-10-08 17:49:35.522081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.556 [2024-10-08 17:49:35.522301] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.556 [2024-10-08 17:49:35.522310] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.556 [2024-10-08 17:49:35.522317] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.556 [2024-10-08 17:49:35.525858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.556 [2024-10-08 17:49:35.535257] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.556 [2024-10-08 17:49:35.535805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.556 [2024-10-08 17:49:35.535823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.556 [2024-10-08 17:49:35.535831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.556 [2024-10-08 17:49:35.536058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.556 [2024-10-08 17:49:35.536277] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.556 [2024-10-08 17:49:35.536286] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.556 [2024-10-08 17:49:35.536294] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.556 [2024-10-08 17:49:35.539847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.819 [2024-10-08 17:49:35.549044] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.819 [2024-10-08 17:49:35.549685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.819 [2024-10-08 17:49:35.549729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.819 [2024-10-08 17:49:35.549741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.819 [2024-10-08 17:49:35.549993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.819 [2024-10-08 17:49:35.550217] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.819 [2024-10-08 17:49:35.550227] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.819 [2024-10-08 17:49:35.550235] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.819 [2024-10-08 17:49:35.553784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.819 [2024-10-08 17:49:35.562981] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.819 [2024-10-08 17:49:35.563544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.819 [2024-10-08 17:49:35.563568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.819 [2024-10-08 17:49:35.563576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.819 [2024-10-08 17:49:35.563797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.819 [2024-10-08 17:49:35.564030] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.819 [2024-10-08 17:49:35.564039] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.819 [2024-10-08 17:49:35.564046] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.819 [2024-10-08 17:49:35.567593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.819 [2024-10-08 17:49:35.576797] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.819 [2024-10-08 17:49:35.577336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.819 [2024-10-08 17:49:35.577354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.819 [2024-10-08 17:49:35.577362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.819 [2024-10-08 17:49:35.577581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.819 [2024-10-08 17:49:35.577800] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.819 [2024-10-08 17:49:35.577809] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.819 [2024-10-08 17:49:35.577816] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.819 [2024-10-08 17:49:35.581364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.819 [2024-10-08 17:49:35.590768] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.819 [2024-10-08 17:49:35.591425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.819 [2024-10-08 17:49:35.591470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.819 [2024-10-08 17:49:35.591483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.819 [2024-10-08 17:49:35.591726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.819 [2024-10-08 17:49:35.591950] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.819 [2024-10-08 17:49:35.591959] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.819 [2024-10-08 17:49:35.591967] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.819 [2024-10-08 17:49:35.595528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.819 [2024-10-08 17:49:35.604723] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.819 [2024-10-08 17:49:35.605327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.819 [2024-10-08 17:49:35.605351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.819 [2024-10-08 17:49:35.605359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.819 [2024-10-08 17:49:35.605580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.819 [2024-10-08 17:49:35.605800] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.819 [2024-10-08 17:49:35.605809] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.819 [2024-10-08 17:49:35.605816] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.819 [2024-10-08 17:49:35.609380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.819 [2024-10-08 17:49:35.618566] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.819 [2024-10-08 17:49:35.619228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.819 [2024-10-08 17:49:35.619278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.819 [2024-10-08 17:49:35.619290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.819 [2024-10-08 17:49:35.619536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.819 [2024-10-08 17:49:35.619761] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.819 [2024-10-08 17:49:35.619770] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.819 [2024-10-08 17:49:35.619778] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.819 [2024-10-08 17:49:35.623345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.819 [2024-10-08 17:49:35.632555] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.819 [2024-10-08 17:49:35.633105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.819 [2024-10-08 17:49:35.633157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.819 [2024-10-08 17:49:35.633169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.819 [2024-10-08 17:49:35.633417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.819 [2024-10-08 17:49:35.633641] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.819 [2024-10-08 17:49:35.633650] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.819 [2024-10-08 17:49:35.633658] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.819 [2024-10-08 17:49:35.637227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.819 [2024-10-08 17:49:35.646444] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.819 [2024-10-08 17:49:35.647045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.819 [2024-10-08 17:49:35.647070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.819 [2024-10-08 17:49:35.647079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.819 [2024-10-08 17:49:35.647301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.819 [2024-10-08 17:49:35.647521] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.819 [2024-10-08 17:49:35.647531] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.819 [2024-10-08 17:49:35.647539] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.819 [2024-10-08 17:49:35.651093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.819 [2024-10-08 17:49:35.660297] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.819 [2024-10-08 17:49:35.660941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.819 [2024-10-08 17:49:35.661007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.819 [2024-10-08 17:49:35.661026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.819 [2024-10-08 17:49:35.661277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.819 [2024-10-08 17:49:35.661502] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.819 [2024-10-08 17:49:35.661512] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.819 [2024-10-08 17:49:35.661520] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.819 [2024-10-08 17:49:35.665086] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.819 [2024-10-08 17:49:35.674109] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.819 [2024-10-08 17:49:35.674729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.820 [2024-10-08 17:49:35.674757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.820 [2024-10-08 17:49:35.674765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.820 [2024-10-08 17:49:35.674995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.820 [2024-10-08 17:49:35.675218] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.820 [2024-10-08 17:49:35.675228] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.820 [2024-10-08 17:49:35.675236] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.820 [2024-10-08 17:49:35.678790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.820 [2024-10-08 17:49:35.688001] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.820 [2024-10-08 17:49:35.688561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.820 [2024-10-08 17:49:35.688584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.820 [2024-10-08 17:49:35.688592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.820 [2024-10-08 17:49:35.688812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.820 [2024-10-08 17:49:35.689041] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.820 [2024-10-08 17:49:35.689052] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.820 [2024-10-08 17:49:35.689060] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.820 [2024-10-08 17:49:35.692623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.820 [2024-10-08 17:49:35.701836] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.820 [2024-10-08 17:49:35.702412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.820 [2024-10-08 17:49:35.702435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.820 [2024-10-08 17:49:35.702444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.820 [2024-10-08 17:49:35.702664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.820 [2024-10-08 17:49:35.702884] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.820 [2024-10-08 17:49:35.702901] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.820 [2024-10-08 17:49:35.702909] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.820 [2024-10-08 17:49:35.706482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.820 [2024-10-08 17:49:35.715693] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.820 [2024-10-08 17:49:35.716403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.820 [2024-10-08 17:49:35.716467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.820 [2024-10-08 17:49:35.716480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.820 [2024-10-08 17:49:35.716736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.820 [2024-10-08 17:49:35.716961] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.820 [2024-10-08 17:49:35.716970] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.820 [2024-10-08 17:49:35.716992] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.820 [2024-10-08 17:49:35.720566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.820 [2024-10-08 17:49:35.729574] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.820 [2024-10-08 17:49:35.730312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.820 [2024-10-08 17:49:35.730374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.820 [2024-10-08 17:49:35.730387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.820 [2024-10-08 17:49:35.730642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.820 [2024-10-08 17:49:35.730869] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.820 [2024-10-08 17:49:35.730878] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.820 [2024-10-08 17:49:35.730887] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.820 [2024-10-08 17:49:35.734466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.820 [2024-10-08 17:49:35.743498] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.820 [2024-10-08 17:49:35.744090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.820 [2024-10-08 17:49:35.744120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.820 [2024-10-08 17:49:35.744129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.820 [2024-10-08 17:49:35.744351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.820 [2024-10-08 17:49:35.744572] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.820 [2024-10-08 17:49:35.744583] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.820 [2024-10-08 17:49:35.744591] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.820 [2024-10-08 17:49:35.748158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.820 [2024-10-08 17:49:35.757376] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.820 [2024-10-08 17:49:35.757989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.820 [2024-10-08 17:49:35.758013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.820 [2024-10-08 17:49:35.758021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.820 [2024-10-08 17:49:35.758242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.820 [2024-10-08 17:49:35.758463] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.820 [2024-10-08 17:49:35.758472] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.820 [2024-10-08 17:49:35.758479] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.820 [2024-10-08 17:49:35.762038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.820 [2024-10-08 17:49:35.771247] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.820 [2024-10-08 17:49:35.771797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.820 [2024-10-08 17:49:35.771820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.820 [2024-10-08 17:49:35.771828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.820 [2024-10-08 17:49:35.772057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.820 [2024-10-08 17:49:35.772278] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.820 [2024-10-08 17:49:35.772287] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.820 [2024-10-08 17:49:35.772295] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.820 [2024-10-08 17:49:35.775875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.820 [2024-10-08 17:49:35.785087] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.820 [2024-10-08 17:49:35.785661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.820 [2024-10-08 17:49:35.785683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.820 [2024-10-08 17:49:35.785691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.820 [2024-10-08 17:49:35.785912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.820 [2024-10-08 17:49:35.786141] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.820 [2024-10-08 17:49:35.786151] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.820 [2024-10-08 17:49:35.786159] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.820 [2024-10-08 17:49:35.789715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:43.820 [2024-10-08 17:49:35.798951] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:43.820 [2024-10-08 17:49:35.799514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.820 [2024-10-08 17:49:35.799536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:43.820 [2024-10-08 17:49:35.799545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:43.820 [2024-10-08 17:49:35.799772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:43.820 [2024-10-08 17:49:35.800002] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:43.820 [2024-10-08 17:49:35.800011] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:43.820 [2024-10-08 17:49:35.800019] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:43.820 [2024-10-08 17:49:35.803585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.082 [2024-10-08 17:49:35.812822] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.082 [2024-10-08 17:49:35.813395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.082 [2024-10-08 17:49:35.813417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.082 [2024-10-08 17:49:35.813426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.082 [2024-10-08 17:49:35.813647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.082 [2024-10-08 17:49:35.813866] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.082 [2024-10-08 17:49:35.813875] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.082 [2024-10-08 17:49:35.813883] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.082 [2024-10-08 17:49:35.817455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.082 [2024-10-08 17:49:35.826682] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.082 [2024-10-08 17:49:35.827359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.083 [2024-10-08 17:49:35.827423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.083 [2024-10-08 17:49:35.827436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.083 [2024-10-08 17:49:35.827691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.083 [2024-10-08 17:49:35.827919] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.083 [2024-10-08 17:49:35.827928] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.083 [2024-10-08 17:49:35.827937] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.083 [2024-10-08 17:49:35.831537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.083 [2024-10-08 17:49:35.840600] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.083 [2024-10-08 17:49:35.841353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.083 [2024-10-08 17:49:35.841416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.083 [2024-10-08 17:49:35.841429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.083 [2024-10-08 17:49:35.841685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.083 [2024-10-08 17:49:35.841912] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.083 [2024-10-08 17:49:35.841921] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.083 [2024-10-08 17:49:35.841937] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.083 [2024-10-08 17:49:35.845520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.083 [2024-10-08 17:49:35.854531] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.083 [2024-10-08 17:49:35.855276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.083 [2024-10-08 17:49:35.855341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.083 [2024-10-08 17:49:35.855354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.083 [2024-10-08 17:49:35.855609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.083 [2024-10-08 17:49:35.855835] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.083 [2024-10-08 17:49:35.855844] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.083 [2024-10-08 17:49:35.855853] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.083 [2024-10-08 17:49:35.859441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.083 [2024-10-08 17:49:35.868475] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.083 [2024-10-08 17:49:35.869125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.083 [2024-10-08 17:49:35.869189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.083 [2024-10-08 17:49:35.869201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.083 [2024-10-08 17:49:35.869456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.083 [2024-10-08 17:49:35.869682] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.083 [2024-10-08 17:49:35.869692] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.083 [2024-10-08 17:49:35.869701] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.083 [2024-10-08 17:49:35.873303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.083 [2024-10-08 17:49:35.882318] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.083 [2024-10-08 17:49:35.882910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.083 [2024-10-08 17:49:35.882939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.083 [2024-10-08 17:49:35.882948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.083 [2024-10-08 17:49:35.883178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.083 [2024-10-08 17:49:35.883399] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.083 [2024-10-08 17:49:35.883409] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.083 [2024-10-08 17:49:35.883416] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.083 [2024-10-08 17:49:35.886980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.083 [2024-10-08 17:49:35.896203] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.083 [2024-10-08 17:49:35.896771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.083 [2024-10-08 17:49:35.896795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.083 [2024-10-08 17:49:35.896804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.083 [2024-10-08 17:49:35.897037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.083 [2024-10-08 17:49:35.897259] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.083 [2024-10-08 17:49:35.897268] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.083 [2024-10-08 17:49:35.897276] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.083 [2024-10-08 17:49:35.900828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.083 [2024-10-08 17:49:35.910048] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.083 [2024-10-08 17:49:35.910737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.083 [2024-10-08 17:49:35.910801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.083 [2024-10-08 17:49:35.910815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.083 [2024-10-08 17:49:35.911083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.083 [2024-10-08 17:49:35.911311] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.083 [2024-10-08 17:49:35.911321] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.083 [2024-10-08 17:49:35.911330] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.083 [2024-10-08 17:49:35.914901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.083 [2024-10-08 17:49:35.923921] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.083 [2024-10-08 17:49:35.924525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.083 [2024-10-08 17:49:35.924555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.083 [2024-10-08 17:49:35.924564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.083 [2024-10-08 17:49:35.924787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.083 [2024-10-08 17:49:35.925018] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.083 [2024-10-08 17:49:35.925029] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.083 [2024-10-08 17:49:35.925037] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.083 [2024-10-08 17:49:35.928605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.083 [2024-10-08 17:49:35.937831] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.083 [2024-10-08 17:49:35.938524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.083 [2024-10-08 17:49:35.938589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.083 [2024-10-08 17:49:35.938602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.083 [2024-10-08 17:49:35.938865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.083 [2024-10-08 17:49:35.939105] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.083 [2024-10-08 17:49:35.939116] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.083 [2024-10-08 17:49:35.939124] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.083 [2024-10-08 17:49:35.942703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.083 7743.33 IOPS, 30.25 MiB/s [2024-10-08T15:49:36.075Z] [2024-10-08 17:49:35.953384] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.083 [2024-10-08 17:49:35.954119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.083 [2024-10-08 17:49:35.954182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.083 [2024-10-08 17:49:35.954196] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.083 [2024-10-08 17:49:35.954452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.083 [2024-10-08 17:49:35.954678] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.083 [2024-10-08 17:49:35.954687] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.083 [2024-10-08 17:49:35.954696] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.083 [2024-10-08 17:49:35.958280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.083 [2024-10-08 17:49:35.967281] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.083 [2024-10-08 17:49:35.967912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.083 [2024-10-08 17:49:35.967940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.083 [2024-10-08 17:49:35.967949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.084 [2024-10-08 17:49:35.968183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.084 [2024-10-08 17:49:35.968405] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.084 [2024-10-08 17:49:35.968415] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.084 [2024-10-08 17:49:35.968422] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.084 [2024-10-08 17:49:35.971981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.084 [2024-10-08 17:49:35.981206] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.084 [2024-10-08 17:49:35.981771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.084 [2024-10-08 17:49:35.981794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.084 [2024-10-08 17:49:35.981803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.084 [2024-10-08 17:49:35.982038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.084 [2024-10-08 17:49:35.982260] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.084 [2024-10-08 17:49:35.982270] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.084 [2024-10-08 17:49:35.982285] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.084 [2024-10-08 17:49:35.985858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.084 [2024-10-08 17:49:35.995078] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.084 [2024-10-08 17:49:35.995695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.084 [2024-10-08 17:49:35.995717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.084 [2024-10-08 17:49:35.995726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.084 [2024-10-08 17:49:35.995946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.084 [2024-10-08 17:49:35.996179] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.084 [2024-10-08 17:49:35.996189] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.084 [2024-10-08 17:49:35.996197] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.084 [2024-10-08 17:49:35.999750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.084 [2024-10-08 17:49:36.008954] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.084 [2024-10-08 17:49:36.009522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.084 [2024-10-08 17:49:36.009545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.084 [2024-10-08 17:49:36.009553] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.084 [2024-10-08 17:49:36.009774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.084 [2024-10-08 17:49:36.010005] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.084 [2024-10-08 17:49:36.010015] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.084 [2024-10-08 17:49:36.010023] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.084 [2024-10-08 17:49:36.013576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.084 [2024-10-08 17:49:36.022768] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.084 [2024-10-08 17:49:36.023374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.084 [2024-10-08 17:49:36.023396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.084 [2024-10-08 17:49:36.023404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.084 [2024-10-08 17:49:36.023625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.084 [2024-10-08 17:49:36.023846] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.084 [2024-10-08 17:49:36.023855] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.084 [2024-10-08 17:49:36.023863] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.084 [2024-10-08 17:49:36.027420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.084 [2024-10-08 17:49:36.036604] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.084 [2024-10-08 17:49:36.037306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.084 [2024-10-08 17:49:36.037375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.084 [2024-10-08 17:49:36.037388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.084 [2024-10-08 17:49:36.037643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.084 [2024-10-08 17:49:36.037870] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.084 [2024-10-08 17:49:36.037880] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.084 [2024-10-08 17:49:36.037888] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.084 [2024-10-08 17:49:36.041492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.084 [2024-10-08 17:49:36.050494] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.084 [2024-10-08 17:49:36.051239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.084 [2024-10-08 17:49:36.051301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.084 [2024-10-08 17:49:36.051314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.084 [2024-10-08 17:49:36.051569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.084 [2024-10-08 17:49:36.051795] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.084 [2024-10-08 17:49:36.051804] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.084 [2024-10-08 17:49:36.051812] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.084 [2024-10-08 17:49:36.055396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.084 [2024-10-08 17:49:36.064396] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.084 [2024-10-08 17:49:36.065085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.084 [2024-10-08 17:49:36.065147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.084 [2024-10-08 17:49:36.065159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.084 [2024-10-08 17:49:36.065415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.084 [2024-10-08 17:49:36.065640] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.084 [2024-10-08 17:49:36.065649] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.084 [2024-10-08 17:49:36.065658] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.084 [2024-10-08 17:49:36.069246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.346 [2024-10-08 17:49:36.077377] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.346 [2024-10-08 17:49:36.077904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.346 [2024-10-08 17:49:36.077930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.346 [2024-10-08 17:49:36.077937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.346 [2024-10-08 17:49:36.078100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.346 [2024-10-08 17:49:36.078262] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.346 [2024-10-08 17:49:36.078268] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.346 [2024-10-08 17:49:36.078273] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.346 [2024-10-08 17:49:36.080724] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.346 [2024-10-08 17:49:36.090043] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.346 [2024-10-08 17:49:36.090671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.346 [2024-10-08 17:49:36.090723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.346 [2024-10-08 17:49:36.090732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.346 [2024-10-08 17:49:36.090914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.346 [2024-10-08 17:49:36.091084] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.346 [2024-10-08 17:49:36.091092] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.346 [2024-10-08 17:49:36.091098] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.346 [2024-10-08 17:49:36.093551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.346 [2024-10-08 17:49:36.102739] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.346 [2024-10-08 17:49:36.103235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.346 [2024-10-08 17:49:36.103283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.346 [2024-10-08 17:49:36.103292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.346 [2024-10-08 17:49:36.103471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.346 [2024-10-08 17:49:36.103627] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.346 [2024-10-08 17:49:36.103634] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.346 [2024-10-08 17:49:36.103642] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.346 [2024-10-08 17:49:36.106106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.346 [2024-10-08 17:49:36.115426] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.346 [2024-10-08 17:49:36.116007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.346 [2024-10-08 17:49:36.116052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.346 [2024-10-08 17:49:36.116061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.346 [2024-10-08 17:49:36.116237] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.346 [2024-10-08 17:49:36.116392] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.346 [2024-10-08 17:49:36.116399] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.346 [2024-10-08 17:49:36.116405] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.346 [2024-10-08 17:49:36.118872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.346 [2024-10-08 17:49:36.128047] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.346 [2024-10-08 17:49:36.128621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.346 [2024-10-08 17:49:36.128660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.346 [2024-10-08 17:49:36.128669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.346 [2024-10-08 17:49:36.128842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.346 [2024-10-08 17:49:36.129007] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.346 [2024-10-08 17:49:36.129015] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.346 [2024-10-08 17:49:36.129021] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.346 [2024-10-08 17:49:36.131464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.346 [2024-10-08 17:49:36.140785] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.346 [2024-10-08 17:49:36.141421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.346 [2024-10-08 17:49:36.141459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.346 [2024-10-08 17:49:36.141468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.346 [2024-10-08 17:49:36.141640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.346 [2024-10-08 17:49:36.141795] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.346 [2024-10-08 17:49:36.141801] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.346 [2024-10-08 17:49:36.141807] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.346 [2024-10-08 17:49:36.144258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.346 [2024-10-08 17:49:36.153435] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.346 [2024-10-08 17:49:36.154031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.346 [2024-10-08 17:49:36.154068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.346 [2024-10-08 17:49:36.154077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.346 [2024-10-08 17:49:36.154251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.346 [2024-10-08 17:49:36.154405] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.346 [2024-10-08 17:49:36.154411] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.346 [2024-10-08 17:49:36.154416] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.346 [2024-10-08 17:49:36.156868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.346 [2024-10-08 17:49:36.166181] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.346 [2024-10-08 17:49:36.166763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.346 [2024-10-08 17:49:36.166799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.346 [2024-10-08 17:49:36.166812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.346 [2024-10-08 17:49:36.166991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.346 [2024-10-08 17:49:36.167146] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.346 [2024-10-08 17:49:36.167153] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.346 [2024-10-08 17:49:36.167158] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.346 [2024-10-08 17:49:36.169596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.346 [2024-10-08 17:49:36.178904] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.346 [2024-10-08 17:49:36.179435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.346 [2024-10-08 17:49:36.179468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.346 [2024-10-08 17:49:36.179477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.346 [2024-10-08 17:49:36.179645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.346 [2024-10-08 17:49:36.179798] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.346 [2024-10-08 17:49:36.179805] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.346 [2024-10-08 17:49:36.179811] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.346 [2024-10-08 17:49:36.182256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.346 [2024-10-08 17:49:36.191556] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.346 [2024-10-08 17:49:36.192199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.346 [2024-10-08 17:49:36.192231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.346 [2024-10-08 17:49:36.192240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.346 [2024-10-08 17:49:36.192408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.347 [2024-10-08 17:49:36.192561] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.347 [2024-10-08 17:49:36.192568] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.347 [2024-10-08 17:49:36.192574] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.347 [2024-10-08 17:49:36.195018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.347 [2024-10-08 17:49:36.204189] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.347 [2024-10-08 17:49:36.204680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.347 [2024-10-08 17:49:36.204695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.347 [2024-10-08 17:49:36.204700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.347 [2024-10-08 17:49:36.204851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.347 [2024-10-08 17:49:36.205007] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.347 [2024-10-08 17:49:36.205018] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.347 [2024-10-08 17:49:36.205023] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.347 [2024-10-08 17:49:36.207450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.347 [2024-10-08 17:49:36.216894] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.347 [2024-10-08 17:49:36.217445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.347 [2024-10-08 17:49:36.217476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.347 [2024-10-08 17:49:36.217485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.347 [2024-10-08 17:49:36.217651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.347 [2024-10-08 17:49:36.217805] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.347 [2024-10-08 17:49:36.217811] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.347 [2024-10-08 17:49:36.217816] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.347 [2024-10-08 17:49:36.220259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.347 [2024-10-08 17:49:36.229560] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.347 [2024-10-08 17:49:36.230131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.347 [2024-10-08 17:49:36.230162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.347 [2024-10-08 17:49:36.230170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.347 [2024-10-08 17:49:36.230337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.347 [2024-10-08 17:49:36.230490] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.347 [2024-10-08 17:49:36.230497] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.347 [2024-10-08 17:49:36.230502] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.347 [2024-10-08 17:49:36.232941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.347 [2024-10-08 17:49:36.242255] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.347 [2024-10-08 17:49:36.242816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.347 [2024-10-08 17:49:36.242846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.347 [2024-10-08 17:49:36.242855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.347 [2024-10-08 17:49:36.243029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.347 [2024-10-08 17:49:36.243183] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.347 [2024-10-08 17:49:36.243189] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.347 [2024-10-08 17:49:36.243195] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.347 [2024-10-08 17:49:36.245627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.347 [2024-10-08 17:49:36.254940] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.347 [2024-10-08 17:49:36.255516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.347 [2024-10-08 17:49:36.255546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.347 [2024-10-08 17:49:36.255555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.347 [2024-10-08 17:49:36.255721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.347 [2024-10-08 17:49:36.255874] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.347 [2024-10-08 17:49:36.255880] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.347 [2024-10-08 17:49:36.255886] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.347 [2024-10-08 17:49:36.258323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.347 [2024-10-08 17:49:36.267629] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.347 [2024-10-08 17:49:36.268201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.347 [2024-10-08 17:49:36.268231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.347 [2024-10-08 17:49:36.268240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.347 [2024-10-08 17:49:36.268407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.347 [2024-10-08 17:49:36.268560] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.347 [2024-10-08 17:49:36.268566] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.347 [2024-10-08 17:49:36.268571] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.347 [2024-10-08 17:49:36.271011] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.347 [2024-10-08 17:49:36.280324] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.347 [2024-10-08 17:49:36.280860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.347 [2024-10-08 17:49:36.280890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.347 [2024-10-08 17:49:36.280899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.347 [2024-10-08 17:49:36.281074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.347 [2024-10-08 17:49:36.281228] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.347 [2024-10-08 17:49:36.281235] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.347 [2024-10-08 17:49:36.281240] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.347 [2024-10-08 17:49:36.283672] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.347 [2024-10-08 17:49:36.292977] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.347 [2024-10-08 17:49:36.293549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.347 [2024-10-08 17:49:36.293579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.347 [2024-10-08 17:49:36.293588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.347 [2024-10-08 17:49:36.293760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.347 [2024-10-08 17:49:36.293914] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.347 [2024-10-08 17:49:36.293920] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.347 [2024-10-08 17:49:36.293926] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.347 [2024-10-08 17:49:36.296366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.347 [2024-10-08 17:49:36.305660] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.347 [2024-10-08 17:49:36.306150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.347 [2024-10-08 17:49:36.306166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.347 [2024-10-08 17:49:36.306172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.347 [2024-10-08 17:49:36.306323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.347 [2024-10-08 17:49:36.306473] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.347 [2024-10-08 17:49:36.306479] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.347 [2024-10-08 17:49:36.306484] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.347 [2024-10-08 17:49:36.308911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.347 [2024-10-08 17:49:36.318349] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.347 [2024-10-08 17:49:36.318693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.347 [2024-10-08 17:49:36.318706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.347 [2024-10-08 17:49:36.318711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.347 [2024-10-08 17:49:36.318861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.347 [2024-10-08 17:49:36.319016] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.347 [2024-10-08 17:49:36.319022] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.347 [2024-10-08 17:49:36.319027] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.347 [2024-10-08 17:49:36.321452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.347 [2024-10-08 17:49:36.331032] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.347 [2024-10-08 17:49:36.331487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.348 [2024-10-08 17:49:36.331499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.348 [2024-10-08 17:49:36.331504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.348 [2024-10-08 17:49:36.331654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.348 [2024-10-08 17:49:36.331804] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.348 [2024-10-08 17:49:36.331810] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.348 [2024-10-08 17:49:36.331818] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.348 [2024-10-08 17:49:36.334252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.609 [2024-10-08 17:49:36.343707] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.609 [2024-10-08 17:49:36.344215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.609 [2024-10-08 17:49:36.344245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.609 [2024-10-08 17:49:36.344254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.609 [2024-10-08 17:49:36.344420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.609 [2024-10-08 17:49:36.344573] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.609 [2024-10-08 17:49:36.344579] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.609 [2024-10-08 17:49:36.344585] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.609 [2024-10-08 17:49:36.347027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.609 [2024-10-08 17:49:36.356332] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.609 [2024-10-08 17:49:36.356901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.609 [2024-10-08 17:49:36.356931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.609 [2024-10-08 17:49:36.356940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.609 [2024-10-08 17:49:36.357114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.609 [2024-10-08 17:49:36.357268] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.609 [2024-10-08 17:49:36.357274] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.609 [2024-10-08 17:49:36.357279] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.609 [2024-10-08 17:49:36.359712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.609 [2024-10-08 17:49:36.369013] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.609 [2024-10-08 17:49:36.369552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.609 [2024-10-08 17:49:36.369582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.609 [2024-10-08 17:49:36.369590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.609 [2024-10-08 17:49:36.369756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.609 [2024-10-08 17:49:36.369909] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.609 [2024-10-08 17:49:36.369915] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.609 [2024-10-08 17:49:36.369921] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.609 [2024-10-08 17:49:36.372361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.609 [2024-10-08 17:49:36.381672] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.609 [2024-10-08 17:49:36.382231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.609 [2024-10-08 17:49:36.382261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.609 [2024-10-08 17:49:36.382270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.609 [2024-10-08 17:49:36.382436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.609 [2024-10-08 17:49:36.382589] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.609 [2024-10-08 17:49:36.382595] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.609 [2024-10-08 17:49:36.382600] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.609 [2024-10-08 17:49:36.385041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.609 [2024-10-08 17:49:36.394343] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.609 [2024-10-08 17:49:36.394793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.609 [2024-10-08 17:49:36.394807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.609 [2024-10-08 17:49:36.394813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.609 [2024-10-08 17:49:36.394963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.609 [2024-10-08 17:49:36.395121] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.609 [2024-10-08 17:49:36.395127] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.609 [2024-10-08 17:49:36.395132] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.609 [2024-10-08 17:49:36.397558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.609 [2024-10-08 17:49:36.407129] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.609 [2024-10-08 17:49:36.407581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.609 [2024-10-08 17:49:36.407612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.609 [2024-10-08 17:49:36.407620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.609 [2024-10-08 17:49:36.407787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.609 [2024-10-08 17:49:36.407940] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.609 [2024-10-08 17:49:36.407946] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.609 [2024-10-08 17:49:36.407951] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.609 [2024-10-08 17:49:36.410392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.610 [2024-10-08 17:49:36.419838] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.610 [2024-10-08 17:49:36.420403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.610 [2024-10-08 17:49:36.420433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.610 [2024-10-08 17:49:36.420441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.610 [2024-10-08 17:49:36.420611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.610 [2024-10-08 17:49:36.420764] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.610 [2024-10-08 17:49:36.420771] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.610 [2024-10-08 17:49:36.420776] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.610 [2024-10-08 17:49:36.423217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.610 [2024-10-08 17:49:36.432532] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.610 [2024-10-08 17:49:36.433119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.610 [2024-10-08 17:49:36.433149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.610 [2024-10-08 17:49:36.433158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.610 [2024-10-08 17:49:36.433324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.610 [2024-10-08 17:49:36.433477] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.610 [2024-10-08 17:49:36.433484] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.610 [2024-10-08 17:49:36.433489] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.610 [2024-10-08 17:49:36.435928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.610 [2024-10-08 17:49:36.445248] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.610 [2024-10-08 17:49:36.445873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.610 [2024-10-08 17:49:36.445903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.610 [2024-10-08 17:49:36.445912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.610 [2024-10-08 17:49:36.446086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.610 [2024-10-08 17:49:36.446240] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.610 [2024-10-08 17:49:36.446246] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.610 [2024-10-08 17:49:36.446252] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.610 [2024-10-08 17:49:36.448687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.610 [2024-10-08 17:49:36.457856] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.610 [2024-10-08 17:49:36.458383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.610 [2024-10-08 17:49:36.458398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.610 [2024-10-08 17:49:36.458403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.610 [2024-10-08 17:49:36.458554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.610 [2024-10-08 17:49:36.458704] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.610 [2024-10-08 17:49:36.458710] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.610 [2024-10-08 17:49:36.458718] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.610 [2024-10-08 17:49:36.461153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.610 [2024-10-08 17:49:36.470518] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.610 [2024-10-08 17:49:36.471097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.610 [2024-10-08 17:49:36.471127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.610 [2024-10-08 17:49:36.471136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.610 [2024-10-08 17:49:36.471305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.610 [2024-10-08 17:49:36.471458] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.610 [2024-10-08 17:49:36.471464] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.610 [2024-10-08 17:49:36.471470] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.610 [2024-10-08 17:49:36.473916] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.610 [2024-10-08 17:49:36.483224] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.610 [2024-10-08 17:49:36.483710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.610 [2024-10-08 17:49:36.483725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.610 [2024-10-08 17:49:36.483730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.610 [2024-10-08 17:49:36.483881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.610 [2024-10-08 17:49:36.484037] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.610 [2024-10-08 17:49:36.484043] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.610 [2024-10-08 17:49:36.484048] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.610 [2024-10-08 17:49:36.486478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.610 [2024-10-08 17:49:36.495916] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.610 [2024-10-08 17:49:36.496451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.610 [2024-10-08 17:49:36.496481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.610 [2024-10-08 17:49:36.496490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.610 [2024-10-08 17:49:36.496656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.610 [2024-10-08 17:49:36.496809] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.610 [2024-10-08 17:49:36.496816] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.610 [2024-10-08 17:49:36.496821] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.610 [2024-10-08 17:49:36.499261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.610 [2024-10-08 17:49:36.508580] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.610 [2024-10-08 17:49:36.509172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.610 [2024-10-08 17:49:36.509206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.610 [2024-10-08 17:49:36.509214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.610 [2024-10-08 17:49:36.509381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.610 [2024-10-08 17:49:36.509534] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.610 [2024-10-08 17:49:36.509540] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.610 [2024-10-08 17:49:36.509545] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.610 [2024-10-08 17:49:36.511984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.610 [2024-10-08 17:49:36.521281] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.610 [2024-10-08 17:49:36.521847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.610 [2024-10-08 17:49:36.521877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.610 [2024-10-08 17:49:36.521885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.610 [2024-10-08 17:49:36.522059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.610 [2024-10-08 17:49:36.522214] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.610 [2024-10-08 17:49:36.522220] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.610 [2024-10-08 17:49:36.522226] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.610 [2024-10-08 17:49:36.524657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.610 [2024-10-08 17:49:36.533954] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.610 [2024-10-08 17:49:36.534557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.610 [2024-10-08 17:49:36.534587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.610 [2024-10-08 17:49:36.534596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.610 [2024-10-08 17:49:36.534762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.610 [2024-10-08 17:49:36.534915] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.610 [2024-10-08 17:49:36.534922] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.610 [2024-10-08 17:49:36.534927] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.610 [2024-10-08 17:49:36.537367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.610 [2024-10-08 17:49:36.546682] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.610 [2024-10-08 17:49:36.547236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.610 [2024-10-08 17:49:36.547266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.610 [2024-10-08 17:49:36.547275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.610 [2024-10-08 17:49:36.547440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.610 [2024-10-08 17:49:36.547597] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.610 [2024-10-08 17:49:36.547603] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.610 [2024-10-08 17:49:36.547609] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.611 [2024-10-08 17:49:36.550048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.611 [2024-10-08 17:49:36.559353] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.611 [2024-10-08 17:49:36.559893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.611 [2024-10-08 17:49:36.559923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:44.611 [2024-10-08 17:49:36.559932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:44.611 [2024-10-08 17:49:36.560106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:44.611 [2024-10-08 17:49:36.560260] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.611 [2024-10-08 17:49:36.560266] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.611 [2024-10-08 17:49:36.560272] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.611 [2024-10-08 17:49:36.562702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.611 [2024-10-08 17:49:36.572005] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.611 [2024-10-08 17:49:36.572573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.611 [2024-10-08 17:49:36.572603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.611 [2024-10-08 17:49:36.572612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.611 [2024-10-08 17:49:36.572778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.611 [2024-10-08 17:49:36.572932] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.611 [2024-10-08 17:49:36.572938] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.611 [2024-10-08 17:49:36.572943] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.611 [2024-10-08 17:49:36.575389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.611 [2024-10-08 17:49:36.584692] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.611 [2024-10-08 17:49:36.585278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.611 [2024-10-08 17:49:36.585308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.611 [2024-10-08 17:49:36.585317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.611 [2024-10-08 17:49:36.585483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.611 [2024-10-08 17:49:36.585637] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.611 [2024-10-08 17:49:36.585643] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.611 [2024-10-08 17:49:36.585649] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.611 [2024-10-08 17:49:36.588092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.611 [2024-10-08 17:49:36.597387] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.611 [2024-10-08 17:49:36.597949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.611 [2024-10-08 17:49:36.597985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.611 [2024-10-08 17:49:36.597994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.611 [2024-10-08 17:49:36.598160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.611 [2024-10-08 17:49:36.598313] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.611 [2024-10-08 17:49:36.598320] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.611 [2024-10-08 17:49:36.598325] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.611 [2024-10-08 17:49:36.600761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.872 [2024-10-08 17:49:36.610075] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.872 [2024-10-08 17:49:36.610639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.872 [2024-10-08 17:49:36.610669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.872 [2024-10-08 17:49:36.610678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.872 [2024-10-08 17:49:36.610844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.872 [2024-10-08 17:49:36.611006] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.872 [2024-10-08 17:49:36.611013] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.872 [2024-10-08 17:49:36.611019] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.872 [2024-10-08 17:49:36.613452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.872 [2024-10-08 17:49:36.622753] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.872 [2024-10-08 17:49:36.623223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.872 [2024-10-08 17:49:36.623238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.872 [2024-10-08 17:49:36.623244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.872 [2024-10-08 17:49:36.623395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.872 [2024-10-08 17:49:36.623546] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.872 [2024-10-08 17:49:36.623552] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.873 [2024-10-08 17:49:36.623557] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.873 [2024-10-08 17:49:36.625986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.873 [2024-10-08 17:49:36.635429] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.873 [2024-10-08 17:49:36.635906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.873 [2024-10-08 17:49:36.635919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.873 [2024-10-08 17:49:36.635928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.873 [2024-10-08 17:49:36.636084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.873 [2024-10-08 17:49:36.636234] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.873 [2024-10-08 17:49:36.636240] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.873 [2024-10-08 17:49:36.636245] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.873 [2024-10-08 17:49:36.638671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.873 [2024-10-08 17:49:36.648114] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.873 [2024-10-08 17:49:36.648592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.873 [2024-10-08 17:49:36.648605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.873 [2024-10-08 17:49:36.648610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.873 [2024-10-08 17:49:36.648760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.873 [2024-10-08 17:49:36.648910] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.873 [2024-10-08 17:49:36.648916] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.873 [2024-10-08 17:49:36.648921] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.873 [2024-10-08 17:49:36.651352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.873 [2024-10-08 17:49:36.660787] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.873 [2024-10-08 17:49:36.661380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.873 [2024-10-08 17:49:36.661410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.873 [2024-10-08 17:49:36.661419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.873 [2024-10-08 17:49:36.661585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.873 [2024-10-08 17:49:36.661738] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.873 [2024-10-08 17:49:36.661745] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.873 [2024-10-08 17:49:36.661750] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.873 [2024-10-08 17:49:36.664192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.873 [2024-10-08 17:49:36.673497] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.873 [2024-10-08 17:49:36.674061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.873 [2024-10-08 17:49:36.674091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.873 [2024-10-08 17:49:36.674100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.873 [2024-10-08 17:49:36.674269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.873 [2024-10-08 17:49:36.674422] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.873 [2024-10-08 17:49:36.674432] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.873 [2024-10-08 17:49:36.674437] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.873 [2024-10-08 17:49:36.676877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.873 [2024-10-08 17:49:36.686176] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.873 [2024-10-08 17:49:36.686739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.873 [2024-10-08 17:49:36.686770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.873 [2024-10-08 17:49:36.686778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.873 [2024-10-08 17:49:36.686944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.873 [2024-10-08 17:49:36.687106] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.873 [2024-10-08 17:49:36.687113] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.873 [2024-10-08 17:49:36.687118] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.873 [2024-10-08 17:49:36.689551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.873 [2024-10-08 17:49:36.698858] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.873 [2024-10-08 17:49:36.699458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.873 [2024-10-08 17:49:36.699488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.873 [2024-10-08 17:49:36.699497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.873 [2024-10-08 17:49:36.699663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.873 [2024-10-08 17:49:36.699817] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.873 [2024-10-08 17:49:36.699823] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.873 [2024-10-08 17:49:36.699829] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.873 [2024-10-08 17:49:36.702268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.873 [2024-10-08 17:49:36.711578] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.873 [2024-10-08 17:49:36.712171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.873 [2024-10-08 17:49:36.712202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.873 [2024-10-08 17:49:36.712211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.873 [2024-10-08 17:49:36.712377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.873 [2024-10-08 17:49:36.712530] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.873 [2024-10-08 17:49:36.712536] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.873 [2024-10-08 17:49:36.712541] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.873 [2024-10-08 17:49:36.714982] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.873 [2024-10-08 17:49:36.724283] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.873 [2024-10-08 17:49:36.724845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.873 [2024-10-08 17:49:36.724875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.873 [2024-10-08 17:49:36.724884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.873 [2024-10-08 17:49:36.725058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.873 [2024-10-08 17:49:36.725212] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.873 [2024-10-08 17:49:36.725218] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.873 [2024-10-08 17:49:36.725224] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.873 [2024-10-08 17:49:36.727657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.873 [2024-10-08 17:49:36.736959] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.873 [2024-10-08 17:49:36.737525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.873 [2024-10-08 17:49:36.737555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.873 [2024-10-08 17:49:36.737564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.873 [2024-10-08 17:49:36.737731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.873 [2024-10-08 17:49:36.737884] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.873 [2024-10-08 17:49:36.737890] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.873 [2024-10-08 17:49:36.737896] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.873 [2024-10-08 17:49:36.740336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.873 [2024-10-08 17:49:36.749645] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.873 [2024-10-08 17:49:36.750213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.873 [2024-10-08 17:49:36.750243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.873 [2024-10-08 17:49:36.750252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.873 [2024-10-08 17:49:36.750418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.873 [2024-10-08 17:49:36.750572] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.873 [2024-10-08 17:49:36.750578] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.873 [2024-10-08 17:49:36.750583] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.873 [2024-10-08 17:49:36.753025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.873 [2024-10-08 17:49:36.762336] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.873 [2024-10-08 17:49:36.762964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.873 [2024-10-08 17:49:36.763000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.873 [2024-10-08 17:49:36.763009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.873 [2024-10-08 17:49:36.763180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.873 [2024-10-08 17:49:36.763334] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.874 [2024-10-08 17:49:36.763340] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.874 [2024-10-08 17:49:36.763345] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.874 [2024-10-08 17:49:36.765784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.874 [2024-10-08 17:49:36.774960] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.874 [2024-10-08 17:49:36.775489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.874 [2024-10-08 17:49:36.775520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.874 [2024-10-08 17:49:36.775529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.874 [2024-10-08 17:49:36.775695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.874 [2024-10-08 17:49:36.775847] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.874 [2024-10-08 17:49:36.775854] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.874 [2024-10-08 17:49:36.775859] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.874 [2024-10-08 17:49:36.778299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.874 [2024-10-08 17:49:36.787599] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.874 [2024-10-08 17:49:36.788189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.874 [2024-10-08 17:49:36.788219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.874 [2024-10-08 17:49:36.788228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.874 [2024-10-08 17:49:36.788394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.874 [2024-10-08 17:49:36.788547] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.874 [2024-10-08 17:49:36.788553] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.874 [2024-10-08 17:49:36.788559] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.874 [2024-10-08 17:49:36.791000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.874 [2024-10-08 17:49:36.800300] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.874 [2024-10-08 17:49:36.800865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.874 [2024-10-08 17:49:36.800896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.874 [2024-10-08 17:49:36.800905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.874 [2024-10-08 17:49:36.801078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.874 [2024-10-08 17:49:36.801232] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.874 [2024-10-08 17:49:36.801238] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.874 [2024-10-08 17:49:36.801247] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.874 [2024-10-08 17:49:36.803681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.874 [2024-10-08 17:49:36.813001] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.874 [2024-10-08 17:49:36.813568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.874 [2024-10-08 17:49:36.813598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.874 [2024-10-08 17:49:36.813607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.874 [2024-10-08 17:49:36.813773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.874 [2024-10-08 17:49:36.813926] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.874 [2024-10-08 17:49:36.813933] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.874 [2024-10-08 17:49:36.813938] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.874 [2024-10-08 17:49:36.816378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.874 [2024-10-08 17:49:36.825696] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.874 [2024-10-08 17:49:36.826288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.874 [2024-10-08 17:49:36.826318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.874 [2024-10-08 17:49:36.826327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.874 [2024-10-08 17:49:36.826493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.874 [2024-10-08 17:49:36.826646] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.874 [2024-10-08 17:49:36.826653] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.874 [2024-10-08 17:49:36.826658] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.874 [2024-10-08 17:49:36.829099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.874 [2024-10-08 17:49:36.838411] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.874 [2024-10-08 17:49:36.838774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.874 [2024-10-08 17:49:36.838790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.874 [2024-10-08 17:49:36.838797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.874 [2024-10-08 17:49:36.838949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.874 [2024-10-08 17:49:36.839107] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.874 [2024-10-08 17:49:36.839113] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.874 [2024-10-08 17:49:36.839118] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.874 [2024-10-08 17:49:36.841548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.874 [2024-10-08 17:49:36.851156] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.874 [2024-10-08 17:49:36.851642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.874 [2024-10-08 17:49:36.851655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:44.874 [2024-10-08 17:49:36.851661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:44.874 [2024-10-08 17:49:36.851811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:44.874 [2024-10-08 17:49:36.851961] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.874 [2024-10-08 17:49:36.851967] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.874 [2024-10-08 17:49:36.851972] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.874 [2024-10-08 17:49:36.854408] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.874 [2024-10-08 17:49:36.863854] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.135 [2024-10-08 17:49:36.864308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.135 [2024-10-08 17:49:36.864322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.135 [2024-10-08 17:49:36.864328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.135 [2024-10-08 17:49:36.864479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.135 [2024-10-08 17:49:36.864631] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.135 [2024-10-08 17:49:36.864636] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.135 [2024-10-08 17:49:36.864641] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.135 [2024-10-08 17:49:36.867077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.135 [2024-10-08 17:49:36.876538] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.135 [2024-10-08 17:49:36.877000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.135 [2024-10-08 17:49:36.877030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.135 [2024-10-08 17:49:36.877039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.135 [2024-10-08 17:49:36.877205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.135 [2024-10-08 17:49:36.877358] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.135 [2024-10-08 17:49:36.877365] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.136 [2024-10-08 17:49:36.877370] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.136 [2024-10-08 17:49:36.879809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.136 [2024-10-08 17:49:36.889257] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.136 [2024-10-08 17:49:36.889797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.136 [2024-10-08 17:49:36.889828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.136 [2024-10-08 17:49:36.889836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.136 [2024-10-08 17:49:36.890010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.136 [2024-10-08 17:49:36.890167] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.136 [2024-10-08 17:49:36.890174] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.136 [2024-10-08 17:49:36.890179] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.136 [2024-10-08 17:49:36.892611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.136 [2024-10-08 17:49:36.901914] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.136 [2024-10-08 17:49:36.902469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.136 [2024-10-08 17:49:36.902499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.136 [2024-10-08 17:49:36.902508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.136 [2024-10-08 17:49:36.902674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.136 [2024-10-08 17:49:36.902827] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.136 [2024-10-08 17:49:36.902834] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.136 [2024-10-08 17:49:36.902839] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.136 [2024-10-08 17:49:36.905279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.136 [2024-10-08 17:49:36.914580] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.136 [2024-10-08 17:49:36.915068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.136 [2024-10-08 17:49:36.915098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.136 [2024-10-08 17:49:36.915106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.136 [2024-10-08 17:49:36.915275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.136 [2024-10-08 17:49:36.915428] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.136 [2024-10-08 17:49:36.915434] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.136 [2024-10-08 17:49:36.915440] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.136 [2024-10-08 17:49:36.917877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.136 [2024-10-08 17:49:36.927188] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.136 [2024-10-08 17:49:36.927738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.136 [2024-10-08 17:49:36.927769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.136 [2024-10-08 17:49:36.927778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.136 [2024-10-08 17:49:36.927944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.136 [2024-10-08 17:49:36.928104] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.136 [2024-10-08 17:49:36.928111] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.136 [2024-10-08 17:49:36.928116] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.136 [2024-10-08 17:49:36.930555] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.136 [2024-10-08 17:49:36.939865] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.136 [2024-10-08 17:49:36.940327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.136 [2024-10-08 17:49:36.940342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.136 [2024-10-08 17:49:36.940348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.136 [2024-10-08 17:49:36.940499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.136 [2024-10-08 17:49:36.940650] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.136 [2024-10-08 17:49:36.940655] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.136 [2024-10-08 17:49:36.940660] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.136 [2024-10-08 17:49:36.943100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.136 5807.50 IOPS, 22.69 MiB/s [2024-10-08T15:49:37.128Z] [2024-10-08 17:49:36.953543] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.136 [2024-10-08 17:49:36.954079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.136 [2024-10-08 17:49:36.954110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.136 [2024-10-08 17:49:36.954119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.136 [2024-10-08 17:49:36.954287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.136 [2024-10-08 17:49:36.954440] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.136 [2024-10-08 17:49:36.954447] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.136 [2024-10-08 17:49:36.954453] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.136 [2024-10-08 17:49:36.956891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.136 [2024-10-08 17:49:36.966199] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.136 [2024-10-08 17:49:36.966641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.136 [2024-10-08 17:49:36.966656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.136 [2024-10-08 17:49:36.966662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.136 [2024-10-08 17:49:36.966814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.136 [2024-10-08 17:49:36.966964] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.136 [2024-10-08 17:49:36.966970] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.136 [2024-10-08 17:49:36.966979] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.136 [2024-10-08 17:49:36.969407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.136 [2024-10-08 17:49:36.978862] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.136 [2024-10-08 17:49:36.979323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.136 [2024-10-08 17:49:36.979339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.136 [2024-10-08 17:49:36.979345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.136 [2024-10-08 17:49:36.979495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.136 [2024-10-08 17:49:36.979646] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.136 [2024-10-08 17:49:36.979651] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.136 [2024-10-08 17:49:36.979656] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.136 [2024-10-08 17:49:36.982086] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.136 [2024-10-08 17:49:36.991537] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.136 [2024-10-08 17:49:36.992014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.136 [2024-10-08 17:49:36.992027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.136 [2024-10-08 17:49:36.992032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.136 [2024-10-08 17:49:36.992183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.136 [2024-10-08 17:49:36.992333] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.136 [2024-10-08 17:49:36.992338] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.136 [2024-10-08 17:49:36.992344] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.136 [2024-10-08 17:49:36.994773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.136 [2024-10-08 17:49:37.004215] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.136 [2024-10-08 17:49:37.004625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.136 [2024-10-08 17:49:37.004637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.136 [2024-10-08 17:49:37.004642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.136 [2024-10-08 17:49:37.004792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.136 [2024-10-08 17:49:37.004942] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.136 [2024-10-08 17:49:37.004948] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.136 [2024-10-08 17:49:37.004953] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.136 [2024-10-08 17:49:37.007383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.136 [2024-10-08 17:49:37.016826] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.136 [2024-10-08 17:49:37.017398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.136 [2024-10-08 17:49:37.017428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.136 [2024-10-08 17:49:37.017438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.137 [2024-10-08 17:49:37.017607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.137 [2024-10-08 17:49:37.017764] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.137 [2024-10-08 17:49:37.017771] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.137 [2024-10-08 17:49:37.017776] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.137 [2024-10-08 17:49:37.020216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.137 [2024-10-08 17:49:37.029525] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.137 [2024-10-08 17:49:37.029988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.137 [2024-10-08 17:49:37.030004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.137 [2024-10-08 17:49:37.030010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.137 [2024-10-08 17:49:37.030161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.137 [2024-10-08 17:49:37.030312] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.137 [2024-10-08 17:49:37.030319] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.137 [2024-10-08 17:49:37.030324] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.137 [2024-10-08 17:49:37.032754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.137 [2024-10-08 17:49:37.042206] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.137 [2024-10-08 17:49:37.042773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.137 [2024-10-08 17:49:37.042804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.137 [2024-10-08 17:49:37.042813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.137 [2024-10-08 17:49:37.042986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.137 [2024-10-08 17:49:37.043148] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.137 [2024-10-08 17:49:37.043155] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.137 [2024-10-08 17:49:37.043160] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.137 [2024-10-08 17:49:37.045593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.137 [2024-10-08 17:49:37.054922] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.137 [2024-10-08 17:49:37.055421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.137 [2024-10-08 17:49:37.055436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.137 [2024-10-08 17:49:37.055442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.137 [2024-10-08 17:49:37.055593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.137 [2024-10-08 17:49:37.055744] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.137 [2024-10-08 17:49:37.055749] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.137 [2024-10-08 17:49:37.055754] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.137 [2024-10-08 17:49:37.058191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.137 [2024-10-08 17:49:37.067647] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.137 [2024-10-08 17:49:37.068003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.137 [2024-10-08 17:49:37.068016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.137 [2024-10-08 17:49:37.068022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.137 [2024-10-08 17:49:37.068172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.137 [2024-10-08 17:49:37.068322] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.137 [2024-10-08 17:49:37.068328] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.137 [2024-10-08 17:49:37.068332] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.137 [2024-10-08 17:49:37.070758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.137 [2024-10-08 17:49:37.080352] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.137 [2024-10-08 17:49:37.080726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.137 [2024-10-08 17:49:37.080738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.137 [2024-10-08 17:49:37.080743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.137 [2024-10-08 17:49:37.080893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.137 [2024-10-08 17:49:37.081047] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.137 [2024-10-08 17:49:37.081053] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.137 [2024-10-08 17:49:37.081058] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.137 [2024-10-08 17:49:37.083485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.137 [2024-10-08 17:49:37.093066] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.137 [2024-10-08 17:49:37.093548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.137 [2024-10-08 17:49:37.093578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.137 [2024-10-08 17:49:37.093586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.137 [2024-10-08 17:49:37.093755] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.137 [2024-10-08 17:49:37.093909] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.137 [2024-10-08 17:49:37.093915] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.137 [2024-10-08 17:49:37.093921] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.137 [2024-10-08 17:49:37.096359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.137 [2024-10-08 17:49:37.105666] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.137 [2024-10-08 17:49:37.106149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.137 [2024-10-08 17:49:37.106165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.137 [2024-10-08 17:49:37.106174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.137 [2024-10-08 17:49:37.106325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.137 [2024-10-08 17:49:37.106476] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.137 [2024-10-08 17:49:37.106481] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.137 [2024-10-08 17:49:37.106486] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.137 [2024-10-08 17:49:37.108911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.137 [2024-10-08 17:49:37.118361] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.137 [2024-10-08 17:49:37.118888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.137 [2024-10-08 17:49:37.118901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.137 [2024-10-08 17:49:37.118906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.137 [2024-10-08 17:49:37.119061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.137 [2024-10-08 17:49:37.119212] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.137 [2024-10-08 17:49:37.119217] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.137 [2024-10-08 17:49:37.119222] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.137 [2024-10-08 17:49:37.121648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.399 [2024-10-08 17:49:37.131087] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.399 [2024-10-08 17:49:37.131574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.399 [2024-10-08 17:49:37.131586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.399 [2024-10-08 17:49:37.131591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.399 [2024-10-08 17:49:37.131741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.399 [2024-10-08 17:49:37.131892] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.399 [2024-10-08 17:49:37.131898] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.399 [2024-10-08 17:49:37.131902] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.399 [2024-10-08 17:49:37.134333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.399 [2024-10-08 17:49:37.143778] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.399 [2024-10-08 17:49:37.144322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.399 [2024-10-08 17:49:37.144352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.399 [2024-10-08 17:49:37.144361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.399 [2024-10-08 17:49:37.144527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.399 [2024-10-08 17:49:37.144681] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.399 [2024-10-08 17:49:37.144691] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.399 [2024-10-08 17:49:37.144696] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.399 [2024-10-08 17:49:37.147139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.399 [2024-10-08 17:49:37.156445] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.399 [2024-10-08 17:49:37.156940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.399 [2024-10-08 17:49:37.156954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.400 [2024-10-08 17:49:37.156960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.400 [2024-10-08 17:49:37.157116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.400 [2024-10-08 17:49:37.157267] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.400 [2024-10-08 17:49:37.157272] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.400 [2024-10-08 17:49:37.157277] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.400 [2024-10-08 17:49:37.159703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.400 [2024-10-08 17:49:37.169154] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.400 [2024-10-08 17:49:37.169638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.400 [2024-10-08 17:49:37.169650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.400 [2024-10-08 17:49:37.169655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.400 [2024-10-08 17:49:37.169805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.400 [2024-10-08 17:49:37.169955] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.400 [2024-10-08 17:49:37.169960] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.400 [2024-10-08 17:49:37.169965] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.400 [2024-10-08 17:49:37.172397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.400 [2024-10-08 17:49:37.181844] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.400 [2024-10-08 17:49:37.182347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.400 [2024-10-08 17:49:37.182360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.400 [2024-10-08 17:49:37.182365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.400 [2024-10-08 17:49:37.182515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.400 [2024-10-08 17:49:37.182665] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.400 [2024-10-08 17:49:37.182671] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.400 [2024-10-08 17:49:37.182676] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.400 [2024-10-08 17:49:37.185105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.400 [2024-10-08 17:49:37.194549] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.400 [2024-10-08 17:49:37.195011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.400 [2024-10-08 17:49:37.195025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.400 [2024-10-08 17:49:37.195030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.400 [2024-10-08 17:49:37.195181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.400 [2024-10-08 17:49:37.195332] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.400 [2024-10-08 17:49:37.195337] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.400 [2024-10-08 17:49:37.195342] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.400 [2024-10-08 17:49:37.197770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.400 [2024-10-08 17:49:37.207212] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.400 [2024-10-08 17:49:37.207661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.400 [2024-10-08 17:49:37.207674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.400 [2024-10-08 17:49:37.207679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.400 [2024-10-08 17:49:37.207829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.400 [2024-10-08 17:49:37.207984] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.400 [2024-10-08 17:49:37.207990] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.400 [2024-10-08 17:49:37.207995] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.400 [2024-10-08 17:49:37.210422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.400 [2024-10-08 17:49:37.219866] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.400 [2024-10-08 17:49:37.220333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.400 [2024-10-08 17:49:37.220363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.400 [2024-10-08 17:49:37.220372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.400 [2024-10-08 17:49:37.220541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.400 [2024-10-08 17:49:37.220694] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.400 [2024-10-08 17:49:37.220700] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.400 [2024-10-08 17:49:37.220706] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.400 [2024-10-08 17:49:37.223147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.400 [2024-10-08 17:49:37.232599] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.400 [2024-10-08 17:49:37.233096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.400 [2024-10-08 17:49:37.233126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.400 [2024-10-08 17:49:37.233135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.400 [2024-10-08 17:49:37.233308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.400 [2024-10-08 17:49:37.233461] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.400 [2024-10-08 17:49:37.233467] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.400 [2024-10-08 17:49:37.233473] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.400 [2024-10-08 17:49:37.235910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.400 [2024-10-08 17:49:37.245231] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.400 [2024-10-08 17:49:37.245800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.400 [2024-10-08 17:49:37.245830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.400 [2024-10-08 17:49:37.245839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.400 [2024-10-08 17:49:37.246011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.400 [2024-10-08 17:49:37.246165] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.400 [2024-10-08 17:49:37.246172] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.400 [2024-10-08 17:49:37.246177] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.400 [2024-10-08 17:49:37.248609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.400 [2024-10-08 17:49:37.257914] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.400 [2024-10-08 17:49:37.258346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.400 [2024-10-08 17:49:37.258376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.400 [2024-10-08 17:49:37.258385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.400 [2024-10-08 17:49:37.258551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.400 [2024-10-08 17:49:37.258704] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.400 [2024-10-08 17:49:37.258711] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.400 [2024-10-08 17:49:37.258716] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.400 [2024-10-08 17:49:37.261152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.400 [2024-10-08 17:49:37.270609] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.400 [2024-10-08 17:49:37.271189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.400 [2024-10-08 17:49:37.271219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.400 [2024-10-08 17:49:37.271227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.400 [2024-10-08 17:49:37.271396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.400 [2024-10-08 17:49:37.271549] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.400 [2024-10-08 17:49:37.271555] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.400 [2024-10-08 17:49:37.271565] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.400 [2024-10-08 17:49:37.274014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.400 [2024-10-08 17:49:37.283324] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.400 [2024-10-08 17:49:37.283894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.400 [2024-10-08 17:49:37.283924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.400 [2024-10-08 17:49:37.283933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.400 [2024-10-08 17:49:37.284108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.400 [2024-10-08 17:49:37.284263] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.400 [2024-10-08 17:49:37.284269] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.401 [2024-10-08 17:49:37.284274] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.401 [2024-10-08 17:49:37.286706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.401 [2024-10-08 17:49:37.296016] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.401 [2024-10-08 17:49:37.296614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.401 [2024-10-08 17:49:37.296645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.401 [2024-10-08 17:49:37.296653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.401 [2024-10-08 17:49:37.296820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.401 [2024-10-08 17:49:37.296980] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.401 [2024-10-08 17:49:37.296987] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.401 [2024-10-08 17:49:37.296992] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.401 [2024-10-08 17:49:37.299425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.401 [2024-10-08 17:49:37.308729] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.401 [2024-10-08 17:49:37.309185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.401 [2024-10-08 17:49:37.309201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.401 [2024-10-08 17:49:37.309206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.401 [2024-10-08 17:49:37.309358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.401 [2024-10-08 17:49:37.309508] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.401 [2024-10-08 17:49:37.309514] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.401 [2024-10-08 17:49:37.309519] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.401 [2024-10-08 17:49:37.311948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.401 [2024-10-08 17:49:37.321394] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.401 [2024-10-08 17:49:37.321958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.401 [2024-10-08 17:49:37.321996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.401 [2024-10-08 17:49:37.322005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.401 [2024-10-08 17:49:37.322173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.401 [2024-10-08 17:49:37.322325] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.401 [2024-10-08 17:49:37.322332] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.401 [2024-10-08 17:49:37.322337] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.401 [2024-10-08 17:49:37.324770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
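The "Failed to flush tqpair=0x14d8100 (9): Bad file descriptor" record inside each cycle is a secondary symptom of the same refusal: by the time the qpair's pending completions are flushed, its socket has already been torn down, so the I/O call reports errno 9 (EBADF). A tiny sketch of that effect using plain POSIX calls rather than SPDK's sock layer (purely illustrative):

    /* ebadf_flush.c: a write() against an already-closed fd fails with
     * errno 9 (EBADF), mirroring the "(9): Bad file descriptor" flush
     * errors in the log. Illustrative only. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        close(fd);                    /* socket torn down, as after a failed connect */

        char byte = 0;
        if (write(fd, &byte, 1) < 0)  /* flush attempt on the dead descriptor */
            printf("flush failed (%d): %s\n", errno, strerror(errno));
        return 0;
    }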
00:33:45.401 [2024-10-08 17:49:37.334076] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.401 [2024-10-08 17:49:37.334559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.401 [2024-10-08 17:49:37.334574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.401 [2024-10-08 17:49:37.334580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.401 [2024-10-08 17:49:37.334730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.401 [2024-10-08 17:49:37.334881] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.401 [2024-10-08 17:49:37.334886] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.401 [2024-10-08 17:49:37.334891] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.401 [2024-10-08 17:49:37.337325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.401 [2024-10-08 17:49:37.346778] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.401 [2024-10-08 17:49:37.347269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.401 [2024-10-08 17:49:37.347299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.401 [2024-10-08 17:49:37.347308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.401 [2024-10-08 17:49:37.347474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.401 [2024-10-08 17:49:37.347628] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.401 [2024-10-08 17:49:37.347635] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.401 [2024-10-08 17:49:37.347640] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.401 [2024-10-08 17:49:37.350078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.401 [2024-10-08 17:49:37.359393] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.401 [2024-10-08 17:49:37.359879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.401 [2024-10-08 17:49:37.359893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.401 [2024-10-08 17:49:37.359899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.401 [2024-10-08 17:49:37.360058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.401 [2024-10-08 17:49:37.360209] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.401 [2024-10-08 17:49:37.360215] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.401 [2024-10-08 17:49:37.360220] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.401 [2024-10-08 17:49:37.362648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.401 [2024-10-08 17:49:37.372137] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.401 [2024-10-08 17:49:37.372582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.401 [2024-10-08 17:49:37.372596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.401 [2024-10-08 17:49:37.372601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.401 [2024-10-08 17:49:37.372752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.401 [2024-10-08 17:49:37.372902] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.401 [2024-10-08 17:49:37.372907] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.401 [2024-10-08 17:49:37.372912] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.401 [2024-10-08 17:49:37.375351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.401 [2024-10-08 17:49:37.384795] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.401 [2024-10-08 17:49:37.385281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.401 [2024-10-08 17:49:37.385295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.401 [2024-10-08 17:49:37.385301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.401 [2024-10-08 17:49:37.385452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.401 [2024-10-08 17:49:37.385602] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.401 [2024-10-08 17:49:37.385608] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.401 [2024-10-08 17:49:37.385613] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.401 [2024-10-08 17:49:37.388040] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.663 [2024-10-08 17:49:37.397479] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.663 [2024-10-08 17:49:37.397799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.664 [2024-10-08 17:49:37.397811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.664 [2024-10-08 17:49:37.397816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.664 [2024-10-08 17:49:37.397966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.664 [2024-10-08 17:49:37.398121] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.664 [2024-10-08 17:49:37.398127] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.664 [2024-10-08 17:49:37.398133] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.664 [2024-10-08 17:49:37.400563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.664 [2024-10-08 17:49:37.410148] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.664 [2024-10-08 17:49:37.410633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.664 [2024-10-08 17:49:37.410645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.664 [2024-10-08 17:49:37.410650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.664 [2024-10-08 17:49:37.410800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.664 [2024-10-08 17:49:37.410950] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.664 [2024-10-08 17:49:37.410956] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.664 [2024-10-08 17:49:37.410961] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.664 [2024-10-08 17:49:37.413393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.664 [2024-10-08 17:49:37.422835] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.664 [2024-10-08 17:49:37.423411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.664 [2024-10-08 17:49:37.423442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.664 [2024-10-08 17:49:37.423451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.664 [2024-10-08 17:49:37.423617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.664 [2024-10-08 17:49:37.423770] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.664 [2024-10-08 17:49:37.423777] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.664 [2024-10-08 17:49:37.423782] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.664 [2024-10-08 17:49:37.426223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.664 [2024-10-08 17:49:37.435526] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.664 [2024-10-08 17:49:37.435982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.664 [2024-10-08 17:49:37.435998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.664 [2024-10-08 17:49:37.436003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.664 [2024-10-08 17:49:37.436154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.664 [2024-10-08 17:49:37.436304] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.664 [2024-10-08 17:49:37.436310] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.664 [2024-10-08 17:49:37.436315] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.664 [2024-10-08 17:49:37.438742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.664 [2024-10-08 17:49:37.448201] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.664 [2024-10-08 17:49:37.448692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.664 [2024-10-08 17:49:37.448708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.664 [2024-10-08 17:49:37.448714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.664 [2024-10-08 17:49:37.448864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.664 [2024-10-08 17:49:37.449018] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.664 [2024-10-08 17:49:37.449024] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.664 [2024-10-08 17:49:37.449029] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.664 [2024-10-08 17:49:37.451457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.664 [2024-10-08 17:49:37.460900] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.664 [2024-10-08 17:49:37.461413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.664 [2024-10-08 17:49:37.461443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.664 [2024-10-08 17:49:37.461452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.664 [2024-10-08 17:49:37.461618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.664 [2024-10-08 17:49:37.461771] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.664 [2024-10-08 17:49:37.461778] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.664 [2024-10-08 17:49:37.461783] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.664 [2024-10-08 17:49:37.464223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.664 [2024-10-08 17:49:37.473535] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.664 [2024-10-08 17:49:37.474014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.664 [2024-10-08 17:49:37.474029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.664 [2024-10-08 17:49:37.474035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.664 [2024-10-08 17:49:37.474186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.664 [2024-10-08 17:49:37.474336] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.664 [2024-10-08 17:49:37.474342] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.664 [2024-10-08 17:49:37.474347] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.664 [2024-10-08 17:49:37.476776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.664 [2024-10-08 17:49:37.486223] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.664 [2024-10-08 17:49:37.486786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.664 [2024-10-08 17:49:37.486817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.664 [2024-10-08 17:49:37.486826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.664 [2024-10-08 17:49:37.486999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.664 [2024-10-08 17:49:37.487157] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.664 [2024-10-08 17:49:37.487163] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.664 [2024-10-08 17:49:37.487168] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.664 [2024-10-08 17:49:37.489599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.664 [2024-10-08 17:49:37.498829] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.664 [2024-10-08 17:49:37.499310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.664 [2024-10-08 17:49:37.499326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.664 [2024-10-08 17:49:37.499332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.664 [2024-10-08 17:49:37.499483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.664 [2024-10-08 17:49:37.499780] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.664 [2024-10-08 17:49:37.499788] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.664 [2024-10-08 17:49:37.499793] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.664 [2024-10-08 17:49:37.502230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.664 [2024-10-08 17:49:37.511539] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.664 [2024-10-08 17:49:37.512084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.664 [2024-10-08 17:49:37.512114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.664 [2024-10-08 17:49:37.512123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.664 [2024-10-08 17:49:37.512291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.664 [2024-10-08 17:49:37.512444] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.664 [2024-10-08 17:49:37.512451] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.664 [2024-10-08 17:49:37.512456] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.664 [2024-10-08 17:49:37.514895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.664 [2024-10-08 17:49:37.524208] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.664 [2024-10-08 17:49:37.524684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.664 [2024-10-08 17:49:37.524698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.665 [2024-10-08 17:49:37.524703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.665 [2024-10-08 17:49:37.524854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.665 [2024-10-08 17:49:37.525009] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.665 [2024-10-08 17:49:37.525015] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.665 [2024-10-08 17:49:37.525020] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.665 [2024-10-08 17:49:37.527447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.665 [2024-10-08 17:49:37.536898] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.665 [2024-10-08 17:49:37.537396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.665 [2024-10-08 17:49:37.537409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.665 [2024-10-08 17:49:37.537414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.665 [2024-10-08 17:49:37.537564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.665 [2024-10-08 17:49:37.537714] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.665 [2024-10-08 17:49:37.537720] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.665 [2024-10-08 17:49:37.537725] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.665 [2024-10-08 17:49:37.540155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.665 [2024-10-08 17:49:37.549606] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.665 [2024-10-08 17:49:37.550080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.665 [2024-10-08 17:49:37.550093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.665 [2024-10-08 17:49:37.550099] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.665 [2024-10-08 17:49:37.550249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.665 [2024-10-08 17:49:37.550399] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.665 [2024-10-08 17:49:37.550404] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.665 [2024-10-08 17:49:37.550409] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.665 [2024-10-08 17:49:37.552835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.665 [2024-10-08 17:49:37.562301] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.665 [2024-10-08 17:49:37.562744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.665 [2024-10-08 17:49:37.562756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.665 [2024-10-08 17:49:37.562762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.665 [2024-10-08 17:49:37.562912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.665 [2024-10-08 17:49:37.563066] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.665 [2024-10-08 17:49:37.563072] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.665 [2024-10-08 17:49:37.563077] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.665 [2024-10-08 17:49:37.565505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.665 [2024-10-08 17:49:37.574960] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.665 [2024-10-08 17:49:37.575503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.665 [2024-10-08 17:49:37.575534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.665 [2024-10-08 17:49:37.575549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.665 [2024-10-08 17:49:37.575715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.665 [2024-10-08 17:49:37.575868] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.665 [2024-10-08 17:49:37.575874] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.665 [2024-10-08 17:49:37.575880] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.665 [2024-10-08 17:49:37.578321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.665 [2024-10-08 17:49:37.587631] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.665 [2024-10-08 17:49:37.588235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.665 [2024-10-08 17:49:37.588266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.665 [2024-10-08 17:49:37.588275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.665 [2024-10-08 17:49:37.588441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.665 [2024-10-08 17:49:37.588595] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.665 [2024-10-08 17:49:37.588601] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.665 [2024-10-08 17:49:37.588606] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.665 [2024-10-08 17:49:37.591048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.665 [2024-10-08 17:49:37.600355] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.665 [2024-10-08 17:49:37.600839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.665 [2024-10-08 17:49:37.600854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.665 [2024-10-08 17:49:37.600860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.665 [2024-10-08 17:49:37.601015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.665 [2024-10-08 17:49:37.601166] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.665 [2024-10-08 17:49:37.601171] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.665 [2024-10-08 17:49:37.601176] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.665 [2024-10-08 17:49:37.603605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.665 [2024-10-08 17:49:37.613050] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.665 [2024-10-08 17:49:37.613630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.665 [2024-10-08 17:49:37.613660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.665 [2024-10-08 17:49:37.613668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.665 [2024-10-08 17:49:37.613834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.665 [2024-10-08 17:49:37.613993] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.665 [2024-10-08 17:49:37.614003] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.665 [2024-10-08 17:49:37.614009] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.665 [2024-10-08 17:49:37.616442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.665 [2024-10-08 17:49:37.625755] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.665 [2024-10-08 17:49:37.626316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.665 [2024-10-08 17:49:37.626347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.665 [2024-10-08 17:49:37.626355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.665 [2024-10-08 17:49:37.626521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.665 [2024-10-08 17:49:37.626674] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.665 [2024-10-08 17:49:37.626681] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.665 [2024-10-08 17:49:37.626686] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.665 [2024-10-08 17:49:37.629126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.665 [2024-10-08 17:49:37.638438] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.665 [2024-10-08 17:49:37.638933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.665 [2024-10-08 17:49:37.638948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.665 [2024-10-08 17:49:37.638953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.665 [2024-10-08 17:49:37.639110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.665 [2024-10-08 17:49:37.639261] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.665 [2024-10-08 17:49:37.639266] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.665 [2024-10-08 17:49:37.639272] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.665 [2024-10-08 17:49:37.641699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.665 [2024-10-08 17:49:37.651154] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.665 [2024-10-08 17:49:37.651715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.665 [2024-10-08 17:49:37.651745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.665 [2024-10-08 17:49:37.651754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.665 [2024-10-08 17:49:37.651920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.666 [2024-10-08 17:49:37.652080] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.666 [2024-10-08 17:49:37.652087] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.666 [2024-10-08 17:49:37.652093] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.666 [2024-10-08 17:49:37.654526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.928 [2024-10-08 17:49:37.663833] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.928 [2024-10-08 17:49:37.664344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.928 [2024-10-08 17:49:37.664359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.928 [2024-10-08 17:49:37.664364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.928 [2024-10-08 17:49:37.664515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.928 [2024-10-08 17:49:37.664665] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.928 [2024-10-08 17:49:37.664671] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.928 [2024-10-08 17:49:37.664676] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.928 [2024-10-08 17:49:37.667108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.928 [2024-10-08 17:49:37.676565] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:45.928 [2024-10-08 17:49:37.677084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:45.928 [2024-10-08 17:49:37.677115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:45.928 [2024-10-08 17:49:37.677123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:45.928 [2024-10-08 17:49:37.677292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:45.928 [2024-10-08 17:49:37.677446] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:45.928 [2024-10-08 17:49:37.677452] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:45.928 [2024-10-08 17:49:37.677457] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:45.928 [2024-10-08 17:49:37.679895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:45.928 [2024-10-08 17:49:37.689204] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.928 [2024-10-08 17:49:37.689629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.928 [2024-10-08 17:49:37.689643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.928 [2024-10-08 17:49:37.689649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.928 [2024-10-08 17:49:37.689800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.928 [2024-10-08 17:49:37.689950] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.928 [2024-10-08 17:49:37.689955] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.928 [2024-10-08 17:49:37.689960] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.928 [2024-10-08 17:49:37.692395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.928 [2024-10-08 17:49:37.701841] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.928 [2024-10-08 17:49:37.702392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.928 [2024-10-08 17:49:37.702422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.928 [2024-10-08 17:49:37.702431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.928 [2024-10-08 17:49:37.702602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.928 [2024-10-08 17:49:37.702756] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.928 [2024-10-08 17:49:37.702762] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.928 [2024-10-08 17:49:37.702768] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.928 [2024-10-08 17:49:37.705205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.928 [2024-10-08 17:49:37.714511] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.928 [2024-10-08 17:49:37.715025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.928 [2024-10-08 17:49:37.715046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.928 [2024-10-08 17:49:37.715053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.928 [2024-10-08 17:49:37.715210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.928 [2024-10-08 17:49:37.715361] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.928 [2024-10-08 17:49:37.715366] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.928 [2024-10-08 17:49:37.715372] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.928 [2024-10-08 17:49:37.717804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.928 [2024-10-08 17:49:37.727250] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.928 [2024-10-08 17:49:37.727836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.928 [2024-10-08 17:49:37.727866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.928 [2024-10-08 17:49:37.727875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.928 [2024-10-08 17:49:37.728047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.928 [2024-10-08 17:49:37.728201] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.928 [2024-10-08 17:49:37.728207] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.928 [2024-10-08 17:49:37.728212] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.928 [2024-10-08 17:49:37.730645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.928 [2024-10-08 17:49:37.739955] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.928 [2024-10-08 17:49:37.740526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.928 [2024-10-08 17:49:37.740556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.928 [2024-10-08 17:49:37.740565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.928 [2024-10-08 17:49:37.740731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.928 [2024-10-08 17:49:37.740884] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.928 [2024-10-08 17:49:37.740891] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.928 [2024-10-08 17:49:37.740899] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.928 [2024-10-08 17:49:37.743339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.928 [2024-10-08 17:49:37.752649] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.928 [2024-10-08 17:49:37.753103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.928 [2024-10-08 17:49:37.753133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.928 [2024-10-08 17:49:37.753142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.928 [2024-10-08 17:49:37.753310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.928 [2024-10-08 17:49:37.753463] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.928 [2024-10-08 17:49:37.753470] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.928 [2024-10-08 17:49:37.753475] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.928 [2024-10-08 17:49:37.755914] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.928 [2024-10-08 17:49:37.765358] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.928 [2024-10-08 17:49:37.765908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.928 [2024-10-08 17:49:37.765937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.928 [2024-10-08 17:49:37.765946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.928 [2024-10-08 17:49:37.766120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.928 [2024-10-08 17:49:37.766273] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.928 [2024-10-08 17:49:37.766280] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.929 [2024-10-08 17:49:37.766285] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.929 [2024-10-08 17:49:37.768717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.929 [2024-10-08 17:49:37.778030] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.929 [2024-10-08 17:49:37.778601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.929 [2024-10-08 17:49:37.778631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.929 [2024-10-08 17:49:37.778640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.929 [2024-10-08 17:49:37.778806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.929 [2024-10-08 17:49:37.778959] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.929 [2024-10-08 17:49:37.778965] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.929 [2024-10-08 17:49:37.778970] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.929 [2024-10-08 17:49:37.781410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.929 [2024-10-08 17:49:37.790711] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.929 [2024-10-08 17:49:37.791283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.929 [2024-10-08 17:49:37.791313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.929 [2024-10-08 17:49:37.791321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.929 [2024-10-08 17:49:37.791487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.929 [2024-10-08 17:49:37.791641] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.929 [2024-10-08 17:49:37.791648] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.929 [2024-10-08 17:49:37.791653] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.929 [2024-10-08 17:49:37.794092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.929 [2024-10-08 17:49:37.803392] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.929 [2024-10-08 17:49:37.803858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.929 [2024-10-08 17:49:37.803873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.929 [2024-10-08 17:49:37.803879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.929 [2024-10-08 17:49:37.804035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.929 [2024-10-08 17:49:37.804186] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.929 [2024-10-08 17:49:37.804192] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.929 [2024-10-08 17:49:37.804197] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.929 [2024-10-08 17:49:37.806625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.929 [2024-10-08 17:49:37.816070] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.929 [2024-10-08 17:49:37.816650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.929 [2024-10-08 17:49:37.816680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.929 [2024-10-08 17:49:37.816689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.929 [2024-10-08 17:49:37.816855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.929 [2024-10-08 17:49:37.817016] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.929 [2024-10-08 17:49:37.817023] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.929 [2024-10-08 17:49:37.817029] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.929 [2024-10-08 17:49:37.819460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.929 [2024-10-08 17:49:37.828762] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.929 [2024-10-08 17:49:37.829179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.929 [2024-10-08 17:49:37.829209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.929 [2024-10-08 17:49:37.829218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.929 [2024-10-08 17:49:37.829386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.929 [2024-10-08 17:49:37.829543] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.929 [2024-10-08 17:49:37.829550] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.929 [2024-10-08 17:49:37.829555] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.929 [2024-10-08 17:49:37.831996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.929 [2024-10-08 17:49:37.841436] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.929 [2024-10-08 17:49:37.841988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.929 [2024-10-08 17:49:37.842017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.929 [2024-10-08 17:49:37.842026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.929 [2024-10-08 17:49:37.842195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.929 [2024-10-08 17:49:37.842347] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.929 [2024-10-08 17:49:37.842353] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.929 [2024-10-08 17:49:37.842359] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.929 [2024-10-08 17:49:37.844793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.929 [2024-10-08 17:49:37.854100] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.929 [2024-10-08 17:49:37.854668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.929 [2024-10-08 17:49:37.854698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.929 [2024-10-08 17:49:37.854706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.929 [2024-10-08 17:49:37.854872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.929 [2024-10-08 17:49:37.855034] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.929 [2024-10-08 17:49:37.855041] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.929 [2024-10-08 17:49:37.855047] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.929 [2024-10-08 17:49:37.857478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.929 [2024-10-08 17:49:37.866777] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.929 [2024-10-08 17:49:37.867331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.929 [2024-10-08 17:49:37.867361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.929 [2024-10-08 17:49:37.867370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.929 [2024-10-08 17:49:37.867536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.929 [2024-10-08 17:49:37.867689] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.929 [2024-10-08 17:49:37.867696] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.929 [2024-10-08 17:49:37.867701] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.929 [2024-10-08 17:49:37.870144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.929 [2024-10-08 17:49:37.879458] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.929 [2024-10-08 17:49:37.880043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.929 [2024-10-08 17:49:37.880073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.929 [2024-10-08 17:49:37.880082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.929 [2024-10-08 17:49:37.880251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.929 [2024-10-08 17:49:37.880404] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.929 [2024-10-08 17:49:37.880411] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.929 [2024-10-08 17:49:37.880417] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.929 [2024-10-08 17:49:37.882855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.929 [2024-10-08 17:49:37.892164] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.929 [2024-10-08 17:49:37.892737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.929 [2024-10-08 17:49:37.892767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.929 [2024-10-08 17:49:37.892776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.929 [2024-10-08 17:49:37.892942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.929 [2024-10-08 17:49:37.893103] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.929 [2024-10-08 17:49:37.893110] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.929 [2024-10-08 17:49:37.893115] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.929 [2024-10-08 17:49:37.895548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.929 [2024-10-08 17:49:37.904851] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.929 [2024-10-08 17:49:37.905399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.929 [2024-10-08 17:49:37.905429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.929 [2024-10-08 17:49:37.905438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.929 [2024-10-08 17:49:37.905609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.930 [2024-10-08 17:49:37.905762] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.930 [2024-10-08 17:49:37.905768] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.930 [2024-10-08 17:49:37.905774] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.930 [2024-10-08 17:49:37.908211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.930 [2024-10-08 17:49:37.917511] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.930 [2024-10-08 17:49:37.918074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.930 [2024-10-08 17:49:37.918104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:45.930 [2024-10-08 17:49:37.918116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:45.930 [2024-10-08 17:49:37.918285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:45.930 [2024-10-08 17:49:37.918438] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.930 [2024-10-08 17:49:37.918445] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.930 [2024-10-08 17:49:37.918450] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.191 [2024-10-08 17:49:37.920891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.191 [2024-10-08 17:49:37.930202] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.191 [2024-10-08 17:49:37.930768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.191 [2024-10-08 17:49:37.930798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.191 [2024-10-08 17:49:37.930807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.191 [2024-10-08 17:49:37.930973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.191 [2024-10-08 17:49:37.931134] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.191 [2024-10-08 17:49:37.931141] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.191 [2024-10-08 17:49:37.931146] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.191 [2024-10-08 17:49:37.933579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.191 [2024-10-08 17:49:37.942879] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.191 [2024-10-08 17:49:37.943437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.191 [2024-10-08 17:49:37.943467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.191 [2024-10-08 17:49:37.943476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.191 [2024-10-08 17:49:37.943644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.191 [2024-10-08 17:49:37.943797] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.191 [2024-10-08 17:49:37.943803] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.191 [2024-10-08 17:49:37.943810] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.191 [2024-10-08 17:49:37.946257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.191 4646.00 IOPS, 18.15 MiB/s [2024-10-08T15:49:38.183Z] [2024-10-08 17:49:37.955985] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.191 [2024-10-08 17:49:37.956539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.191 [2024-10-08 17:49:37.956569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.191 [2024-10-08 17:49:37.956578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.191 [2024-10-08 17:49:37.956744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.191 [2024-10-08 17:49:37.956901] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.191 [2024-10-08 17:49:37.956907] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.191 [2024-10-08 17:49:37.956913] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.191 [2024-10-08 17:49:37.959353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
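The interleaved bdevperf sample above (4646.00 IOPS, 18.15 MiB/s) is self-consistent if the workload uses 4 KiB I/Os, an assumption since the block size is not shown in this excerpt: 4646 x 4096 B = 19,030,016 B/s, and 19,030,016 / 2^20 ~= 18.15 MiB/s.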
00:33:46.191 [2024-10-08 17:49:37.968658] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.191 [2024-10-08 17:49:37.969139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.191 [2024-10-08 17:49:37.969155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.191 [2024-10-08 17:49:37.969160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.191 [2024-10-08 17:49:37.969312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.191 [2024-10-08 17:49:37.969462] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.191 [2024-10-08 17:49:37.969467] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.191 [2024-10-08 17:49:37.969472] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.191 [2024-10-08 17:49:37.971900] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.192 [2024-10-08 17:49:37.981353] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.192 [2024-10-08 17:49:37.981837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.192 [2024-10-08 17:49:37.981850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.192 [2024-10-08 17:49:37.981856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.192 [2024-10-08 17:49:37.982010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.192 [2024-10-08 17:49:37.982162] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.192 [2024-10-08 17:49:37.982167] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.192 [2024-10-08 17:49:37.982172] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.192 [2024-10-08 17:49:37.984596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.192 [2024-10-08 17:49:37.994039] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.192 [2024-10-08 17:49:37.994491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.192 [2024-10-08 17:49:37.994503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.192 [2024-10-08 17:49:37.994509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.192 [2024-10-08 17:49:37.994659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.192 [2024-10-08 17:49:37.994809] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.192 [2024-10-08 17:49:37.994814] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.192 [2024-10-08 17:49:37.994819] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.192 [2024-10-08 17:49:37.997257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.192 [2024-10-08 17:49:38.006695] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.192 [2024-10-08 17:49:38.007253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.192 [2024-10-08 17:49:38.007283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.192 [2024-10-08 17:49:38.007292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.192 [2024-10-08 17:49:38.007458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.192 [2024-10-08 17:49:38.007612] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.192 [2024-10-08 17:49:38.007618] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.192 [2024-10-08 17:49:38.007623] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.192 [2024-10-08 17:49:38.010060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.192 [2024-10-08 17:49:38.019366] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.192 [2024-10-08 17:49:38.019943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.192 [2024-10-08 17:49:38.019979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.192 [2024-10-08 17:49:38.019989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.192 [2024-10-08 17:49:38.020157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.192 [2024-10-08 17:49:38.020311] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.192 [2024-10-08 17:49:38.020317] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.192 [2024-10-08 17:49:38.020323] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.192 [2024-10-08 17:49:38.022755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.192 [2024-10-08 17:49:38.032063] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.192 [2024-10-08 17:49:38.032630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.192 [2024-10-08 17:49:38.032660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.192 [2024-10-08 17:49:38.032669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.192 [2024-10-08 17:49:38.032835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.192 [2024-10-08 17:49:38.032999] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.192 [2024-10-08 17:49:38.033006] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.192 [2024-10-08 17:49:38.033011] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.192 [2024-10-08 17:49:38.035444] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.192 [2024-10-08 17:49:38.044755] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.192 [2024-10-08 17:49:38.045371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.192 [2024-10-08 17:49:38.045402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.192 [2024-10-08 17:49:38.045413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.192 [2024-10-08 17:49:38.045580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.192 [2024-10-08 17:49:38.045733] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.192 [2024-10-08 17:49:38.045739] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.192 [2024-10-08 17:49:38.045745] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.192 [2024-10-08 17:49:38.048192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.192 [2024-10-08 17:49:38.057489] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.192 [2024-10-08 17:49:38.058076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.192 [2024-10-08 17:49:38.058106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.192 [2024-10-08 17:49:38.058115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.192 [2024-10-08 17:49:38.058284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.192 [2024-10-08 17:49:38.058437] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.192 [2024-10-08 17:49:38.058443] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.192 [2024-10-08 17:49:38.058449] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.192 [2024-10-08 17:49:38.060888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.192 [2024-10-08 17:49:38.070198] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.192 [2024-10-08 17:49:38.070761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.192 [2024-10-08 17:49:38.070792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.192 [2024-10-08 17:49:38.070801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.192 [2024-10-08 17:49:38.070967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.192 [2024-10-08 17:49:38.071124] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.192 [2024-10-08 17:49:38.071131] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.192 [2024-10-08 17:49:38.071137] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.192 [2024-10-08 17:49:38.073570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.192 [2024-10-08 17:49:38.082882] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.192 [2024-10-08 17:49:38.083489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.192 [2024-10-08 17:49:38.083520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.192 [2024-10-08 17:49:38.083529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.192 [2024-10-08 17:49:38.083695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.192 [2024-10-08 17:49:38.083849] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.192 [2024-10-08 17:49:38.083858] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.192 [2024-10-08 17:49:38.083864] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.192 [2024-10-08 17:49:38.086301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.192 [2024-10-08 17:49:38.095605] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.192 [2024-10-08 17:49:38.096100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.192 [2024-10-08 17:49:38.096130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.192 [2024-10-08 17:49:38.096139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.192 [2024-10-08 17:49:38.096307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.192 [2024-10-08 17:49:38.096461] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.192 [2024-10-08 17:49:38.096467] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.192 [2024-10-08 17:49:38.096473] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.192 [2024-10-08 17:49:38.098912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.192 [2024-10-08 17:49:38.108225] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.192 [2024-10-08 17:49:38.108703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.192 [2024-10-08 17:49:38.108718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.192 [2024-10-08 17:49:38.108724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.192 [2024-10-08 17:49:38.108874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.192 [2024-10-08 17:49:38.109029] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.192 [2024-10-08 17:49:38.109035] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.193 [2024-10-08 17:49:38.109041] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.193 [2024-10-08 17:49:38.111470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.193 [2024-10-08 17:49:38.120907] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.193 [2024-10-08 17:49:38.121529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.193 [2024-10-08 17:49:38.121558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.193 [2024-10-08 17:49:38.121567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.193 [2024-10-08 17:49:38.121733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.193 [2024-10-08 17:49:38.121886] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.193 [2024-10-08 17:49:38.121892] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.193 [2024-10-08 17:49:38.121898] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.193 [2024-10-08 17:49:38.124340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.193 [2024-10-08 17:49:38.133510] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.193 [2024-10-08 17:49:38.133887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.193 [2024-10-08 17:49:38.133902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.193 [2024-10-08 17:49:38.133908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.193 [2024-10-08 17:49:38.134062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.193 [2024-10-08 17:49:38.134213] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.193 [2024-10-08 17:49:38.134219] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.193 [2024-10-08 17:49:38.134224] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.193 [2024-10-08 17:49:38.136649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.193 [2024-10-08 17:49:38.146239] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.193 [2024-10-08 17:49:38.146721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.193 [2024-10-08 17:49:38.146734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.193 [2024-10-08 17:49:38.146739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.193 [2024-10-08 17:49:38.146890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.193 [2024-10-08 17:49:38.147044] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.193 [2024-10-08 17:49:38.147050] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.193 [2024-10-08 17:49:38.147055] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.193 [2024-10-08 17:49:38.149483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.193 [2024-10-08 17:49:38.158919] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.193 [2024-10-08 17:49:38.159364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.193 [2024-10-08 17:49:38.159377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.193 [2024-10-08 17:49:38.159382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.193 [2024-10-08 17:49:38.159532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.193 [2024-10-08 17:49:38.159682] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.193 [2024-10-08 17:49:38.159688] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.193 [2024-10-08 17:49:38.159693] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.193 [2024-10-08 17:49:38.162122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.193 [2024-10-08 17:49:38.171555] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.193 [2024-10-08 17:49:38.171880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.193 [2024-10-08 17:49:38.171894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.193 [2024-10-08 17:49:38.171899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.193 [2024-10-08 17:49:38.172060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.193 [2024-10-08 17:49:38.172210] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.193 [2024-10-08 17:49:38.172216] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.193 [2024-10-08 17:49:38.172221] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.193 [2024-10-08 17:49:38.174647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.462 [2024-10-08 17:49:38.184233] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.462 [2024-10-08 17:49:38.184718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.462 [2024-10-08 17:49:38.184730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.462 [2024-10-08 17:49:38.184735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.462 [2024-10-08 17:49:38.184885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.462 [2024-10-08 17:49:38.185040] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.462 [2024-10-08 17:49:38.185046] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.462 [2024-10-08 17:49:38.185051] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.462 [2024-10-08 17:49:38.187477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.462 [2024-10-08 17:49:38.196911] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.462 [2024-10-08 17:49:38.197348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.462 [2024-10-08 17:49:38.197359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.462 [2024-10-08 17:49:38.197365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.462 [2024-10-08 17:49:38.197515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.462 [2024-10-08 17:49:38.197664] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.462 [2024-10-08 17:49:38.197670] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.463 [2024-10-08 17:49:38.197675] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.463 [2024-10-08 17:49:38.200106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.463 [2024-10-08 17:49:38.209539] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.463 [2024-10-08 17:49:38.210174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.463 [2024-10-08 17:49:38.210204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.463 [2024-10-08 17:49:38.210213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.463 [2024-10-08 17:49:38.210379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.463 [2024-10-08 17:49:38.210532] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.463 [2024-10-08 17:49:38.210538] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.463 [2024-10-08 17:49:38.210550] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.463 [2024-10-08 17:49:38.212990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.463 [2024-10-08 17:49:38.222145] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.463 [2024-10-08 17:49:38.222755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.463 [2024-10-08 17:49:38.222786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.463 [2024-10-08 17:49:38.222795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.463 [2024-10-08 17:49:38.222961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.463 [2024-10-08 17:49:38.223122] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.463 [2024-10-08 17:49:38.223129] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.463 [2024-10-08 17:49:38.223135] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.463 [2024-10-08 17:49:38.225568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.463 [2024-10-08 17:49:38.234868] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.463 [2024-10-08 17:49:38.235504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.463 [2024-10-08 17:49:38.235534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.463 [2024-10-08 17:49:38.235543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.463 [2024-10-08 17:49:38.235709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.463 [2024-10-08 17:49:38.235862] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.463 [2024-10-08 17:49:38.235868] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.463 [2024-10-08 17:49:38.235874] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.463 [2024-10-08 17:49:38.238311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.463 [2024-10-08 17:49:38.247478] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.463 [2024-10-08 17:49:38.248065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.463 [2024-10-08 17:49:38.248095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.463 [2024-10-08 17:49:38.248104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.463 [2024-10-08 17:49:38.248272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.463 [2024-10-08 17:49:38.248425] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.463 [2024-10-08 17:49:38.248431] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.463 [2024-10-08 17:49:38.248437] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.463 [2024-10-08 17:49:38.250876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.463 [2024-10-08 17:49:38.260180] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.463 [2024-10-08 17:49:38.260809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.463 [2024-10-08 17:49:38.260839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.463 [2024-10-08 17:49:38.260848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.463 [2024-10-08 17:49:38.261022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.463 [2024-10-08 17:49:38.261176] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.463 [2024-10-08 17:49:38.261182] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.463 [2024-10-08 17:49:38.261187] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.463 [2024-10-08 17:49:38.263620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.463 [2024-10-08 17:49:38.272918] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.463 [2024-10-08 17:49:38.273489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.463 [2024-10-08 17:49:38.273519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.463 [2024-10-08 17:49:38.273528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.463 [2024-10-08 17:49:38.273694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.463 [2024-10-08 17:49:38.273847] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.463 [2024-10-08 17:49:38.273854] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.463 [2024-10-08 17:49:38.273859] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.463 [2024-10-08 17:49:38.276308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.463 [2024-10-08 17:49:38.285611] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.463 [2024-10-08 17:49:38.286294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.463 [2024-10-08 17:49:38.286324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.463 [2024-10-08 17:49:38.286333] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.463 [2024-10-08 17:49:38.286499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.463 [2024-10-08 17:49:38.286652] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.463 [2024-10-08 17:49:38.286659] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.463 [2024-10-08 17:49:38.286664] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.463 [2024-10-08 17:49:38.289106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.463 [2024-10-08 17:49:38.298262] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.463 [2024-10-08 17:49:38.298833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.463 [2024-10-08 17:49:38.298863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.463 [2024-10-08 17:49:38.298872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.463 [2024-10-08 17:49:38.299046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.463 [2024-10-08 17:49:38.299203] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.463 [2024-10-08 17:49:38.299209] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.463 [2024-10-08 17:49:38.299215] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.463 [2024-10-08 17:49:38.301647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.463 [2024-10-08 17:49:38.310947] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.463 [2024-10-08 17:49:38.311520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.463 [2024-10-08 17:49:38.311551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.463 [2024-10-08 17:49:38.311560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.463 [2024-10-08 17:49:38.311726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.463 [2024-10-08 17:49:38.311879] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.463 [2024-10-08 17:49:38.311885] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.463 [2024-10-08 17:49:38.311891] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.463 [2024-10-08 17:49:38.314329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.463 [2024-10-08 17:49:38.323628] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.463 [2024-10-08 17:49:38.324239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.463 [2024-10-08 17:49:38.324269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.463 [2024-10-08 17:49:38.324277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.463 [2024-10-08 17:49:38.324443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.463 [2024-10-08 17:49:38.324597] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.463 [2024-10-08 17:49:38.324603] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.463 [2024-10-08 17:49:38.324608] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.463 [2024-10-08 17:49:38.327048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.463 [2024-10-08 17:49:38.336351] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.463 [2024-10-08 17:49:38.336918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.463 [2024-10-08 17:49:38.336948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.463 [2024-10-08 17:49:38.336956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.463 [2024-10-08 17:49:38.337134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.464 [2024-10-08 17:49:38.337288] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.464 [2024-10-08 17:49:38.337295] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.464 [2024-10-08 17:49:38.337300] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.464 [2024-10-08 17:49:38.339736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.464 [2024-10-08 17:49:38.349047] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.464 [2024-10-08 17:49:38.349622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.464 [2024-10-08 17:49:38.349652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.464 [2024-10-08 17:49:38.349661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.464 [2024-10-08 17:49:38.349827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.464 [2024-10-08 17:49:38.349989] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.464 [2024-10-08 17:49:38.349996] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.464 [2024-10-08 17:49:38.350001] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.464 [2024-10-08 17:49:38.352433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.464 [2024-10-08 17:49:38.361729] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.464 [2024-10-08 17:49:38.362302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.464 [2024-10-08 17:49:38.362332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.464 [2024-10-08 17:49:38.362340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.464 [2024-10-08 17:49:38.362508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.464 [2024-10-08 17:49:38.362661] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.464 [2024-10-08 17:49:38.362667] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.464 [2024-10-08 17:49:38.362673] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.464 [2024-10-08 17:49:38.365111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.464 [2024-10-08 17:49:38.374414] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.464 [2024-10-08 17:49:38.374991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.464 [2024-10-08 17:49:38.375021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.464 [2024-10-08 17:49:38.375030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.464 [2024-10-08 17:49:38.375198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.464 [2024-10-08 17:49:38.375351] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.464 [2024-10-08 17:49:38.375357] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.464 [2024-10-08 17:49:38.375362] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.464 [2024-10-08 17:49:38.377797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.464 [2024-10-08 17:49:38.387105] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.464 [2024-10-08 17:49:38.387610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.464 [2024-10-08 17:49:38.387640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.464 [2024-10-08 17:49:38.387652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.464 [2024-10-08 17:49:38.387818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.464 [2024-10-08 17:49:38.387972] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.464 [2024-10-08 17:49:38.387989] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.464 [2024-10-08 17:49:38.387995] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.464 [2024-10-08 17:49:38.390553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.464 [2024-10-08 17:49:38.399723] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.464 [2024-10-08 17:49:38.400311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.464 [2024-10-08 17:49:38.400341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.464 [2024-10-08 17:49:38.400350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.464 [2024-10-08 17:49:38.400517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.464 [2024-10-08 17:49:38.400670] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.464 [2024-10-08 17:49:38.400676] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.464 [2024-10-08 17:49:38.400681] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.464 [2024-10-08 17:49:38.403120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.464 [2024-10-08 17:49:38.412424] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.464 [2024-10-08 17:49:38.413015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.464 [2024-10-08 17:49:38.413045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.464 [2024-10-08 17:49:38.413053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.464 [2024-10-08 17:49:38.413222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.464 [2024-10-08 17:49:38.413376] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.464 [2024-10-08 17:49:38.413382] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.464 [2024-10-08 17:49:38.413388] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.464 [2024-10-08 17:49:38.415825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.464 [2024-10-08 17:49:38.425126] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.464 [2024-10-08 17:49:38.425462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.464 [2024-10-08 17:49:38.425477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.464 [2024-10-08 17:49:38.425482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.464 [2024-10-08 17:49:38.425633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.464 [2024-10-08 17:49:38.425787] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.464 [2024-10-08 17:49:38.425793] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.464 [2024-10-08 17:49:38.425798] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 555038 Killed "${NVMF_APP[@]}" "$@"
00:33:46.464 [2024-10-08 17:49:38.428232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.464 17:49:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:33:46.464 17:49:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:33:46.464 17:49:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:33:46.464 17:49:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:46.464 17:49:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:46.464 17:49:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=556738
00:33:46.464 17:49:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 556738
00:33:46.464 [2024-10-08 17:49:38.437822] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.464 17:49:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:33:46.464 17:49:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 556738 ']'
00:33:46.464 [2024-10-08 17:49:38.438253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.464 [2024-10-08 17:49:38.438266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.464 [2024-10-08 17:49:38.438272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.464 17:49:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:46.464 [2024-10-08 17:49:38.438422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.464 17:49:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:46.464 [2024-10-08 17:49:38.438572] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.464 [2024-10-08 17:49:38.438579] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.464 [2024-10-08 17:49:38.438584] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.464 17:49:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:46.464 17:49:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:33:46.464 17:49:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:46.464 [2024-10-08 17:49:38.441017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.727 [2024-10-08 17:49:38.450484] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.727 [2024-10-08 17:49:38.450982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.727 [2024-10-08 17:49:38.450996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.727 [2024-10-08 17:49:38.451003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.727 [2024-10-08 17:49:38.451153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.727 [2024-10-08 17:49:38.451307] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.727 [2024-10-08 17:49:38.451314] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.727 [2024-10-08 17:49:38.451320] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.727 [2024-10-08 17:49:38.453751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
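The shell trace interleaved above is tgt_init restarting the target: the old nvmf_tgt (pid 555038) was killed, nvmfappstart -m 0xE launches a new one (pid 556738) inside the cvl_0_0_ns_spdk network namespace, and waitforlisten polls until it answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait pattern (the polling loop is an illustrative stand-in, not the real waitforlisten from autotest_common.sh):

    rpc_sock=/var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the app is up (max_retries=100 in the trace)
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

Until that new target process binds 10.0.0.2:4420 again, every host-side reconnect attempt below keeps hitting ECONNREFUSED.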
00:33:46.727 [2024-10-08 17:49:38.463216] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.727 [2024-10-08 17:49:38.463661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.727 [2024-10-08 17:49:38.463673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.727 [2024-10-08 17:49:38.463678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.727 [2024-10-08 17:49:38.463829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.727 [2024-10-08 17:49:38.463984] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.727 [2024-10-08 17:49:38.463990] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.727 [2024-10-08 17:49:38.463995] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.727 [2024-10-08 17:49:38.466424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.727 [2024-10-08 17:49:38.475877] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.727 [2024-10-08 17:49:38.476357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.727 [2024-10-08 17:49:38.476369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.727 [2024-10-08 17:49:38.476374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.727 [2024-10-08 17:49:38.476524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.727 [2024-10-08 17:49:38.476674] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.728 [2024-10-08 17:49:38.476680] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.728 [2024-10-08 17:49:38.476685] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.728 [2024-10-08 17:49:38.479115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.728 [2024-10-08 17:49:38.488562] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.728 [2024-10-08 17:49:38.488890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.728 [2024-10-08 17:49:38.488901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.728 [2024-10-08 17:49:38.488906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.728 [2024-10-08 17:49:38.489061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.728 [2024-10-08 17:49:38.489211] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.728 [2024-10-08 17:49:38.489217] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.728 [2024-10-08 17:49:38.489221] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.728 [2024-10-08 17:49:38.491653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.728 [2024-10-08 17:49:38.493645] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
00:33:46.728 [2024-10-08 17:49:38.493697] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:46.728 [2024-10-08 17:49:38.501398] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.728 [2024-10-08 17:49:38.502000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.728 [2024-10-08 17:49:38.502031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.728 [2024-10-08 17:49:38.502040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.728 [2024-10-08 17:49:38.502210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.728 [2024-10-08 17:49:38.502363] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.728 [2024-10-08 17:49:38.502369] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.728 [2024-10-08 17:49:38.502375] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.728 [2024-10-08 17:49:38.504815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.728 [2024-10-08 17:49:38.514120] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.728 [2024-10-08 17:49:38.514701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.728 [2024-10-08 17:49:38.514731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.728 [2024-10-08 17:49:38.514740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.728 [2024-10-08 17:49:38.514907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.728 [2024-10-08 17:49:38.515066] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.728 [2024-10-08 17:49:38.515074] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.728 [2024-10-08 17:49:38.515079] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.728 [2024-10-08 17:49:38.517510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.728 [2024-10-08 17:49:38.526819] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.728 [2024-10-08 17:49:38.527404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.728 [2024-10-08 17:49:38.527435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.728 [2024-10-08 17:49:38.527444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.728 [2024-10-08 17:49:38.527610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.728 [2024-10-08 17:49:38.527763] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.728 [2024-10-08 17:49:38.527770] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.728 [2024-10-08 17:49:38.527775] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.728 [2024-10-08 17:49:38.530287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.728 [2024-10-08 17:49:38.539450] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.728 [2024-10-08 17:49:38.539931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.728 [2024-10-08 17:49:38.539946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.728 [2024-10-08 17:49:38.539951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.728 [2024-10-08 17:49:38.540106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.728 [2024-10-08 17:49:38.540258] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.728 [2024-10-08 17:49:38.540263] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.728 [2024-10-08 17:49:38.540268] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.728 [2024-10-08 17:49:38.542694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.728 [2024-10-08 17:49:38.552150] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.728 [2024-10-08 17:49:38.552720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.728 [2024-10-08 17:49:38.552750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.728 [2024-10-08 17:49:38.552759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.728 [2024-10-08 17:49:38.552926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.728 [2024-10-08 17:49:38.553085] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.728 [2024-10-08 17:49:38.553091] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.728 [2024-10-08 17:49:38.553097] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.728 [2024-10-08 17:49:38.555531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.728 [2024-10-08 17:49:38.564838] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.728 [2024-10-08 17:49:38.565300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.728 [2024-10-08 17:49:38.565315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.728 [2024-10-08 17:49:38.565321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.728 [2024-10-08 17:49:38.565473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.728 [2024-10-08 17:49:38.565623] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.728 [2024-10-08 17:49:38.565629] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.728 [2024-10-08 17:49:38.565634] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.728 [2024-10-08 17:49:38.568066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.728 [2024-10-08 17:49:38.577511] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.728 [2024-10-08 17:49:38.577981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.728 [2024-10-08 17:49:38.577994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.728 [2024-10-08 17:49:38.578003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.728 [2024-10-08 17:49:38.578154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.728 [2024-10-08 17:49:38.578305] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.728 [2024-10-08 17:49:38.578311] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.728 [2024-10-08 17:49:38.578316] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.728 [2024-10-08 17:49:38.579091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:33:46.728 [2024-10-08 17:49:38.580743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.728 [2024-10-08 17:49:38.590186] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.728 [2024-10-08 17:49:38.590696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.728 [2024-10-08 17:49:38.590709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.728 [2024-10-08 17:49:38.590715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.728 [2024-10-08 17:49:38.590866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.728 [2024-10-08 17:49:38.591020] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.728 [2024-10-08 17:49:38.591026] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.728 [2024-10-08 17:49:38.591031] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.728 [2024-10-08 17:49:38.593457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.728 [2024-10-08 17:49:38.602894] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.728 [2024-10-08 17:49:38.603360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.728 [2024-10-08 17:49:38.603373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.728 [2024-10-08 17:49:38.603379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.728 [2024-10-08 17:49:38.603529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.728 [2024-10-08 17:49:38.603680] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.728 [2024-10-08 17:49:38.603685] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.728 [2024-10-08 17:49:38.603690] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.728 [2024-10-08 17:49:38.606119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.728 [2024-10-08 17:49:38.615559] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.729 [2024-10-08 17:49:38.616182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.729 [2024-10-08 17:49:38.616215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.729 [2024-10-08 17:49:38.616224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.729 [2024-10-08 17:49:38.616395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.729 [2024-10-08 17:49:38.616549] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.729 [2024-10-08 17:49:38.616559] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.729 [2024-10-08 17:49:38.616565] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.729 [2024-10-08 17:49:38.619005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.729 [2024-10-08 17:49:38.628166] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.729 [2024-10-08 17:49:38.628741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.729 [2024-10-08 17:49:38.628772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.729 [2024-10-08 17:49:38.628781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.729 [2024-10-08 17:49:38.628948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.729 [2024-10-08 17:49:38.629109] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.729 [2024-10-08 17:49:38.629117] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.729 [2024-10-08 17:49:38.629122] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.729 [2024-10-08 17:49:38.631554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.729 [2024-10-08 17:49:38.631908] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:46.729 [2024-10-08 17:49:38.631934] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:46.729 [2024-10-08 17:49:38.631941] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:46.729 [2024-10-08 17:49:38.631947] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:46.729 [2024-10-08 17:49:38.631952] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
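The app_setup_trace notices above spell out how to pull the tracepoints that the target's -e 0xFFFF group mask enabled. Following the log's own hint (instance id 0 matches the -i 0 the target was started with):

    # live snapshot of the nvmf app's tracepoints, per the notice above
    spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/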
00:33:46.729 [2024-10-08 17:49:38.632821] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:33:46.729 [2024-10-08 17:49:38.632970] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:33:46.729 [2024-10-08 17:49:38.632972] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:33:46.729 [2024-10-08 17:49:38.640868] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.729 [2024-10-08 17:49:38.641341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.729 [2024-10-08 17:49:38.641373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.729 [2024-10-08 17:49:38.641382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.729 [2024-10-08 17:49:38.641552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.729 [2024-10-08 17:49:38.641706] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.729 [2024-10-08 17:49:38.641713] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.729 [2024-10-08 17:49:38.641719] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.729 [2024-10-08 17:49:38.644157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.729 [2024-10-08 17:49:38.653616] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.729 [2024-10-08 17:49:38.654217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.729 [2024-10-08 17:49:38.654249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.729 [2024-10-08 17:49:38.654262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.729 [2024-10-08 17:49:38.654430] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.729 [2024-10-08 17:49:38.654583] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.729 [2024-10-08 17:49:38.654590] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.729 [2024-10-08 17:49:38.654596] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.729 [2024-10-08 17:49:38.657032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
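The three reactors above line up with the -m 0xE core mask passed to nvmf_tgt: 0xE is binary 1110, so cores 1, 2 and 3 host reactors while core 0 is excluded, matching the earlier "Total cores available: 3" notice. Decoding the mask in shell:

    # expand core mask 0xE (0b1110) into the cores the reactors run on
    mask=0xE
    for core in {0..3}; do
        (( (mask >> core) & 1 )) && echo "reactor core $core"
    done
    # prints cores 1, 2 and 3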
00:33:46.729 [2024-10-08 17:49:38.666333] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.729 [2024-10-08 17:49:38.666913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.729 [2024-10-08 17:49:38.666943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.729 [2024-10-08 17:49:38.666953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.729 [2024-10-08 17:49:38.667128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.729 [2024-10-08 17:49:38.667282] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.729 [2024-10-08 17:49:38.667289] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.729 [2024-10-08 17:49:38.667295] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.729 [2024-10-08 17:49:38.669727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.729 [2024-10-08 17:49:38.679048] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.729 [2024-10-08 17:49:38.679658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.729 [2024-10-08 17:49:38.679689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.729 [2024-10-08 17:49:38.679698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.729 [2024-10-08 17:49:38.679865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.729 [2024-10-08 17:49:38.680025] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.729 [2024-10-08 17:49:38.680032] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.729 [2024-10-08 17:49:38.680038] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.729 [2024-10-08 17:49:38.682468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.729 [2024-10-08 17:49:38.691773] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.729 [2024-10-08 17:49:38.692313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.729 [2024-10-08 17:49:38.692343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.729 [2024-10-08 17:49:38.692352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.729 [2024-10-08 17:49:38.692520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.729 [2024-10-08 17:49:38.692673] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.729 [2024-10-08 17:49:38.692684] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.729 [2024-10-08 17:49:38.692689] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.729 [2024-10-08 17:49:38.695129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.729 [2024-10-08 17:49:38.704435] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.729 [2024-10-08 17:49:38.704939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.729 [2024-10-08 17:49:38.704954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.729 [2024-10-08 17:49:38.704960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.729 [2024-10-08 17:49:38.705115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.729 [2024-10-08 17:49:38.705267] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.729 [2024-10-08 17:49:38.705272] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.729 [2024-10-08 17:49:38.705278] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.729 [2024-10-08 17:49:38.707706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.729 [2024-10-08 17:49:38.717157] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.729 [2024-10-08 17:49:38.717616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.729 [2024-10-08 17:49:38.717629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.729 [2024-10-08 17:49:38.717634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.729 [2024-10-08 17:49:38.717785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.729 [2024-10-08 17:49:38.717936] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.729 [2024-10-08 17:49:38.717941] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.729 [2024-10-08 17:49:38.717946] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.991 [2024-10-08 17:49:38.720377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.991 [2024-10-08 17:49:38.729827] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.991 [2024-10-08 17:49:38.730447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.991 [2024-10-08 17:49:38.730478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.991 [2024-10-08 17:49:38.730488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.991 [2024-10-08 17:49:38.730658] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.991 [2024-10-08 17:49:38.730811] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.991 [2024-10-08 17:49:38.730818] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.991 [2024-10-08 17:49:38.730824] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.991 [2024-10-08 17:49:38.733264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.991 [2024-10-08 17:49:38.742432] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.991 [2024-10-08 17:49:38.742898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.991 [2024-10-08 17:49:38.742912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.991 [2024-10-08 17:49:38.742918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.991 [2024-10-08 17:49:38.743073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.991 [2024-10-08 17:49:38.743225] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.991 [2024-10-08 17:49:38.743232] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.991 [2024-10-08 17:49:38.743238] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.991 [2024-10-08 17:49:38.745665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.991 [2024-10-08 17:49:38.755125] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.991 [2024-10-08 17:49:38.755569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.991 [2024-10-08 17:49:38.755600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.991 [2024-10-08 17:49:38.755609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.991 [2024-10-08 17:49:38.755776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.991 [2024-10-08 17:49:38.755930] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.991 [2024-10-08 17:49:38.755937] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.991 [2024-10-08 17:49:38.755943] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.991 [2024-10-08 17:49:38.758383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.991 [2024-10-08 17:49:38.767829] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.991 [2024-10-08 17:49:38.768454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.991 [2024-10-08 17:49:38.768484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.991 [2024-10-08 17:49:38.768493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.991 [2024-10-08 17:49:38.768660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.991 [2024-10-08 17:49:38.768814] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.991 [2024-10-08 17:49:38.768820] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.991 [2024-10-08 17:49:38.768825] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.991 [2024-10-08 17:49:38.771265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.991 [2024-10-08 17:49:38.780436] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.991 [2024-10-08 17:49:38.781019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.991 [2024-10-08 17:49:38.781050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.991 [2024-10-08 17:49:38.781059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.991 [2024-10-08 17:49:38.781231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.991 [2024-10-08 17:49:38.781385] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.991 [2024-10-08 17:49:38.781391] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.991 [2024-10-08 17:49:38.781397] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.991 [2024-10-08 17:49:38.783836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.991 [2024-10-08 17:49:38.793148] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.991 [2024-10-08 17:49:38.793579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.991 [2024-10-08 17:49:38.793610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.991 [2024-10-08 17:49:38.793619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.991 [2024-10-08 17:49:38.793788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.991 [2024-10-08 17:49:38.793941] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.991 [2024-10-08 17:49:38.793947] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.991 [2024-10-08 17:49:38.793952] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.991 [2024-10-08 17:49:38.796391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.991 [2024-10-08 17:49:38.805883] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.991 [2024-10-08 17:49:38.806357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.991 [2024-10-08 17:49:38.806372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.991 [2024-10-08 17:49:38.806378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.991 [2024-10-08 17:49:38.806529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.991 [2024-10-08 17:49:38.806679] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.991 [2024-10-08 17:49:38.806685] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.991 [2024-10-08 17:49:38.806690] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.992 [2024-10-08 17:49:38.809119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.992 [2024-10-08 17:49:38.818560] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.992 [2024-10-08 17:49:38.818882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.992 [2024-10-08 17:49:38.818894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.992 [2024-10-08 17:49:38.818899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.992 [2024-10-08 17:49:38.819054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.992 [2024-10-08 17:49:38.819205] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.992 [2024-10-08 17:49:38.819210] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.992 [2024-10-08 17:49:38.819222] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.992 [2024-10-08 17:49:38.821649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.992 [2024-10-08 17:49:38.831236] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.992 [2024-10-08 17:49:38.831693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.992 [2024-10-08 17:49:38.831706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.992 [2024-10-08 17:49:38.831711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.992 [2024-10-08 17:49:38.831862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.992 [2024-10-08 17:49:38.832016] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.992 [2024-10-08 17:49:38.832022] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.992 [2024-10-08 17:49:38.832027] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.992 [2024-10-08 17:49:38.834453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.992 [2024-10-08 17:49:38.843899] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.992 [2024-10-08 17:49:38.844470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.992 [2024-10-08 17:49:38.844501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.992 [2024-10-08 17:49:38.844509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.992 [2024-10-08 17:49:38.844676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.992 [2024-10-08 17:49:38.844829] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.992 [2024-10-08 17:49:38.844835] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.992 [2024-10-08 17:49:38.844841] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.992 [2024-10-08 17:49:38.847279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.992 [2024-10-08 17:49:38.856595] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.992 [2024-10-08 17:49:38.856962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.992 [2024-10-08 17:49:38.856982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.992 [2024-10-08 17:49:38.856988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.992 [2024-10-08 17:49:38.857140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.992 [2024-10-08 17:49:38.857291] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.992 [2024-10-08 17:49:38.857297] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.992 [2024-10-08 17:49:38.857301] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.992 [2024-10-08 17:49:38.859729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.992 [2024-10-08 17:49:38.869328] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.992 [2024-10-08 17:49:38.869944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.992 [2024-10-08 17:49:38.869981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.992 [2024-10-08 17:49:38.869989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.992 [2024-10-08 17:49:38.870156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.992 [2024-10-08 17:49:38.870309] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.992 [2024-10-08 17:49:38.870316] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.992 [2024-10-08 17:49:38.870321] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.992 [2024-10-08 17:49:38.872751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.992 [2024-10-08 17:49:38.882068] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.992 [2024-10-08 17:49:38.882703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.992 [2024-10-08 17:49:38.882734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.992 [2024-10-08 17:49:38.882743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.992 [2024-10-08 17:49:38.882909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.992 [2024-10-08 17:49:38.883068] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.992 [2024-10-08 17:49:38.883075] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.992 [2024-10-08 17:49:38.883081] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.992 [2024-10-08 17:49:38.885512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.992 [2024-10-08 17:49:38.894679] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.992 [2024-10-08 17:49:38.895236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.992 [2024-10-08 17:49:38.895268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.992 [2024-10-08 17:49:38.895276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.992 [2024-10-08 17:49:38.895443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.992 [2024-10-08 17:49:38.895596] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.992 [2024-10-08 17:49:38.895603] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.992 [2024-10-08 17:49:38.895608] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.992 [2024-10-08 17:49:38.898047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.992 [2024-10-08 17:49:38.907354] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.992 [2024-10-08 17:49:38.907955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.992 [2024-10-08 17:49:38.907992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.992 [2024-10-08 17:49:38.908001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.992 [2024-10-08 17:49:38.908174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.992 [2024-10-08 17:49:38.908327] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.992 [2024-10-08 17:49:38.908333] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.992 [2024-10-08 17:49:38.908339] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.992 [2024-10-08 17:49:38.910771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.992 [2024-10-08 17:49:38.920088] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.992 [2024-10-08 17:49:38.920682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.992 [2024-10-08 17:49:38.920712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.992 [2024-10-08 17:49:38.920721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.992 [2024-10-08 17:49:38.920887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.992 [2024-10-08 17:49:38.921047] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.992 [2024-10-08 17:49:38.921054] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.992 [2024-10-08 17:49:38.921059] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.992 [2024-10-08 17:49:38.923491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.992 [2024-10-08 17:49:38.932801] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.992 [2024-10-08 17:49:38.933395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.992 [2024-10-08 17:49:38.933426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.992 [2024-10-08 17:49:38.933435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.992 [2024-10-08 17:49:38.933601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.992 [2024-10-08 17:49:38.933755] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.992 [2024-10-08 17:49:38.933762] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.992 [2024-10-08 17:49:38.933767] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.992 [2024-10-08 17:49:38.936207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.992 [2024-10-08 17:49:38.945508] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.992 [2024-10-08 17:49:38.946051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.992 [2024-10-08 17:49:38.946082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.992 [2024-10-08 17:49:38.946091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.992 [2024-10-08 17:49:38.946257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.992 [2024-10-08 17:49:38.946411] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.993 [2024-10-08 17:49:38.946417] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.993 [2024-10-08 17:49:38.946426] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.993 [2024-10-08 17:49:38.948873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:46.993 3871.67 IOPS, 15.12 MiB/s [2024-10-08T15:49:38.985Z]
[2024-10-08 17:49:38.958470] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:46.993 [2024-10-08 17:49:38.959073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.993 [2024-10-08 17:49:38.959104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:46.993 [2024-10-08 17:49:38.959112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:46.993 [2024-10-08 17:49:38.959281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:46.993 [2024-10-08 17:49:38.959435] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:46.993 [2024-10-08 17:49:38.959441] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:46.993 [2024-10-08 17:49:38.959447] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:46.993 [2024-10-08 17:49:38.961884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
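The bdevperf counter interleaved above (3871.67 IOPS, 15.12 MiB/s) is internally consistent with a 4 KiB I/O size, so the job is still completing small I/Os over the sampling interval even while this controller flaps. The arithmetic, as a quick check:

  # 3871.67 IOPS x 4096 B per I/O, converted to MiB/s:
  python3 -c 'print(3871.67 * 4096 / 2**20)'   # ~15.12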
00:33:46.993 [2024-10-08 17:49:38.971117] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.993 [2024-10-08 17:49:38.971701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.993 [2024-10-08 17:49:38.971731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:46.993 [2024-10-08 17:49:38.971740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:46.993 [2024-10-08 17:49:38.971907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:46.993 [2024-10-08 17:49:38.972066] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.993 [2024-10-08 17:49:38.972073] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.993 [2024-10-08 17:49:38.972078] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.993 [2024-10-08 17:49:38.974510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.255 [2024-10-08 17:49:38.983833] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.255 [2024-10-08 17:49:38.984303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.255 [2024-10-08 17:49:38.984333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.255 [2024-10-08 17:49:38.984343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.255 [2024-10-08 17:49:38.984509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.255 [2024-10-08 17:49:38.984663] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.255 [2024-10-08 17:49:38.984669] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.255 [2024-10-08 17:49:38.984674] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.255 [2024-10-08 17:49:38.987114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.255 [2024-10-08 17:49:38.996568] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.255 [2024-10-08 17:49:38.997099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.255 [2024-10-08 17:49:38.997134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.255 [2024-10-08 17:49:38.997143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.255 [2024-10-08 17:49:38.997311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.255 [2024-10-08 17:49:38.997465] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.255 [2024-10-08 17:49:38.997472] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.255 [2024-10-08 17:49:38.997477] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.255 [2024-10-08 17:49:38.999915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.255 [2024-10-08 17:49:39.009229] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.255 [2024-10-08 17:49:39.009594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.256 [2024-10-08 17:49:39.009610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.256 [2024-10-08 17:49:39.009616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.256 [2024-10-08 17:49:39.009768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.256 [2024-10-08 17:49:39.009918] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.256 [2024-10-08 17:49:39.009924] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.256 [2024-10-08 17:49:39.009929] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.256 [2024-10-08 17:49:39.012363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.256 [2024-10-08 17:49:39.021954] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.256 [2024-10-08 17:49:39.022413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.256 [2024-10-08 17:49:39.022426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.256 [2024-10-08 17:49:39.022431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.256 [2024-10-08 17:49:39.022582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.256 [2024-10-08 17:49:39.022733] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.256 [2024-10-08 17:49:39.022739] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.256 [2024-10-08 17:49:39.022744] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.256 [2024-10-08 17:49:39.025174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.256 [2024-10-08 17:49:39.034618] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.256 [2024-10-08 17:49:39.035075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.256 [2024-10-08 17:49:39.035088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.256 [2024-10-08 17:49:39.035093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.256 [2024-10-08 17:49:39.035244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.256 [2024-10-08 17:49:39.035398] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.256 [2024-10-08 17:49:39.035404] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.256 [2024-10-08 17:49:39.035409] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.256 [2024-10-08 17:49:39.037836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.256 [2024-10-08 17:49:39.047286] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.256 [2024-10-08 17:49:39.047913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.256 [2024-10-08 17:49:39.047943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.256 [2024-10-08 17:49:39.047952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.256 [2024-10-08 17:49:39.048124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.256 [2024-10-08 17:49:39.048278] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.256 [2024-10-08 17:49:39.048284] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.256 [2024-10-08 17:49:39.048290] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.256 [2024-10-08 17:49:39.050731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.256 [2024-10-08 17:49:39.059904] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.256 [2024-10-08 17:49:39.060495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.256 [2024-10-08 17:49:39.060526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.256 [2024-10-08 17:49:39.060535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.256 [2024-10-08 17:49:39.060702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.256 [2024-10-08 17:49:39.060856] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.256 [2024-10-08 17:49:39.060862] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.256 [2024-10-08 17:49:39.060868] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.256 [2024-10-08 17:49:39.063307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.256 [2024-10-08 17:49:39.072610] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.256 [2024-10-08 17:49:39.073124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.256 [2024-10-08 17:49:39.073140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.256 [2024-10-08 17:49:39.073145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.256 [2024-10-08 17:49:39.073296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.256 [2024-10-08 17:49:39.073447] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.256 [2024-10-08 17:49:39.073453] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.256 [2024-10-08 17:49:39.073458] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.256 [2024-10-08 17:49:39.075900] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.256 [2024-10-08 17:49:39.085355] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.256 [2024-10-08 17:49:39.085859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.256 [2024-10-08 17:49:39.085872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.256 [2024-10-08 17:49:39.085877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.256 [2024-10-08 17:49:39.086031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.256 [2024-10-08 17:49:39.086181] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.256 [2024-10-08 17:49:39.086187] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.256 [2024-10-08 17:49:39.086192] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.256 [2024-10-08 17:49:39.088619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.256 [2024-10-08 17:49:39.098063] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.256 [2024-10-08 17:49:39.098407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.256 [2024-10-08 17:49:39.098419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.256 [2024-10-08 17:49:39.098424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.256 [2024-10-08 17:49:39.098574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.256 [2024-10-08 17:49:39.098724] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.256 [2024-10-08 17:49:39.098729] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.256 [2024-10-08 17:49:39.098734] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.256 [2024-10-08 17:49:39.101163] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.256 [2024-10-08 17:49:39.110746] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.256 [2024-10-08 17:49:39.111081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.256 [2024-10-08 17:49:39.111093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.256 [2024-10-08 17:49:39.111098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.256 [2024-10-08 17:49:39.111248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.256 [2024-10-08 17:49:39.111398] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.256 [2024-10-08 17:49:39.111403] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.256 [2024-10-08 17:49:39.111408] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.256 [2024-10-08 17:49:39.113833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.256 [2024-10-08 17:49:39.123419] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.256 [2024-10-08 17:49:39.123731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.256 [2024-10-08 17:49:39.123745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.256 [2024-10-08 17:49:39.123753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.256 [2024-10-08 17:49:39.123904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.256 [2024-10-08 17:49:39.124059] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.256 [2024-10-08 17:49:39.124066] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.256 [2024-10-08 17:49:39.124072] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.256 [2024-10-08 17:49:39.126500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.256 [2024-10-08 17:49:39.136083] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.256 [2024-10-08 17:49:39.136578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.256 [2024-10-08 17:49:39.136590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.256 [2024-10-08 17:49:39.136596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.256 [2024-10-08 17:49:39.136746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.256 [2024-10-08 17:49:39.136896] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.256 [2024-10-08 17:49:39.136902] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.256 [2024-10-08 17:49:39.136907] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.256 [2024-10-08 17:49:39.139337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.257 [2024-10-08 17:49:39.148782] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.257 [2024-10-08 17:49:39.149319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.257 [2024-10-08 17:49:39.149350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.257 [2024-10-08 17:49:39.149359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.257 [2024-10-08 17:49:39.149528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.257 [2024-10-08 17:49:39.149689] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.257 [2024-10-08 17:49:39.149696] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.257 [2024-10-08 17:49:39.149702] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.257 [2024-10-08 17:49:39.152141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.257 [2024-10-08 17:49:39.161458] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.257 [2024-10-08 17:49:39.161966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.257 [2024-10-08 17:49:39.161986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.257 [2024-10-08 17:49:39.161993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.257 [2024-10-08 17:49:39.162144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.257 [2024-10-08 17:49:39.162294] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.257 [2024-10-08 17:49:39.162305] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.257 [2024-10-08 17:49:39.162310] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.257 [2024-10-08 17:49:39.164740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.257 [2024-10-08 17:49:39.174189] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.257 [2024-10-08 17:49:39.174741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.257 [2024-10-08 17:49:39.174771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.257 [2024-10-08 17:49:39.174780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.257 [2024-10-08 17:49:39.174947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.257 [2024-10-08 17:49:39.175114] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.257 [2024-10-08 17:49:39.175121] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.257 [2024-10-08 17:49:39.175126] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.257 [2024-10-08 17:49:39.177559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.257 [2024-10-08 17:49:39.186874] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.257 [2024-10-08 17:49:39.187517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.257 [2024-10-08 17:49:39.187548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.257 [2024-10-08 17:49:39.187556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.257 [2024-10-08 17:49:39.187723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.257 [2024-10-08 17:49:39.187877] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.257 [2024-10-08 17:49:39.187884] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.257 [2024-10-08 17:49:39.187890] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.257 [2024-10-08 17:49:39.190330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.257 [2024-10-08 17:49:39.199494] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.257 [2024-10-08 17:49:39.199970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.257 [2024-10-08 17:49:39.200006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.257 [2024-10-08 17:49:39.200015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.257 [2024-10-08 17:49:39.200182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.257 [2024-10-08 17:49:39.200336] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.257 [2024-10-08 17:49:39.200342] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.257 [2024-10-08 17:49:39.200348] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.257 [2024-10-08 17:49:39.202783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.257 [2024-10-08 17:49:39.212242] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.257 [2024-10-08 17:49:39.212829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.257 [2024-10-08 17:49:39.212860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.257 [2024-10-08 17:49:39.212870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.257 [2024-10-08 17:49:39.213042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.257 [2024-10-08 17:49:39.213196] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.257 [2024-10-08 17:49:39.213203] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.257 [2024-10-08 17:49:39.213208] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.257 [2024-10-08 17:49:39.215640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.257 [2024-10-08 17:49:39.224957] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.257 [2024-10-08 17:49:39.225509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.257 [2024-10-08 17:49:39.225540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.257 [2024-10-08 17:49:39.225549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.257 [2024-10-08 17:49:39.225715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.257 [2024-10-08 17:49:39.225869] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.257 [2024-10-08 17:49:39.225875] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.257 [2024-10-08 17:49:39.225881] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.257 [2024-10-08 17:49:39.228319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.257 [2024-10-08 17:49:39.237633] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.257 [2024-10-08 17:49:39.238106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.257 [2024-10-08 17:49:39.238138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.257 [2024-10-08 17:49:39.238147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.257 [2024-10-08 17:49:39.238317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.257 [2024-10-08 17:49:39.238471] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.257 [2024-10-08 17:49:39.238477] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.257 [2024-10-08 17:49:39.238483] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.257 [2024-10-08 17:49:39.240922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.518 [2024-10-08 17:49:39.250389] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.518 [2024-10-08 17:49:39.250785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-10-08 17:49:39.250800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.518 [2024-10-08 17:49:39.250805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.518 [2024-10-08 17:49:39.250960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.518 [2024-10-08 17:49:39.251117] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.518 [2024-10-08 17:49:39.251123] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.518 [2024-10-08 17:49:39.251129] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.518 [2024-10-08 17:49:39.253557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.519 [2024-10-08 17:49:39.263009] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.519 [2024-10-08 17:49:39.263579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-10-08 17:49:39.263610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.519 [2024-10-08 17:49:39.263619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.519 [2024-10-08 17:49:39.263786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.519 [2024-10-08 17:49:39.263939] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.519 [2024-10-08 17:49:39.263945] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.519 [2024-10-08 17:49:39.263951] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.519 [2024-10-08 17:49:39.266391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.519 [2024-10-08 17:49:39.275708] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:47.519 [2024-10-08 17:49:39.276097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.519 [2024-10-08 17:49:39.276112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:47.519 [2024-10-08 17:49:39.276118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:47.519 [2024-10-08 17:49:39.276269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:47.519 [2024-10-08 17:49:39.276419] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:47.519 [2024-10-08 17:49:39.276425] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:47.519 [2024-10-08 17:49:39.276430] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:47.519 [2024-10-08 17:49:39.278857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:47.519 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:47.519 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:33:47.519 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:33:47.519 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:47.519 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:47.519 [2024-10-08 17:49:39.288445] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:47.519 [2024-10-08 17:49:39.288902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.519 [2024-10-08 17:49:39.288915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420
00:33:47.519 [2024-10-08 17:49:39.288926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set
00:33:47.519 [2024-10-08 17:49:39.289081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor
00:33:47.519 [2024-10-08 17:49:39.289232] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:47.519 [2024-10-08 17:49:39.289237] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:47.519 [2024-10-08 17:49:39.289243] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:47.519 [2024-10-08 17:49:39.291669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
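The xtrace entries above show the harness leaving its "wait for the NVMe-oF target to come up" helper ((( i == 0 )) is the gave-up check, return 0 means the target is ready) and closing the start_nvmf_tgt timing block. The shape of that helper is roughly the following (a sketch with assumed names, counts, and readiness probe, not the literal autotest_common.sh code):

  wait_for_tgt() {               # assumed name; the real helper lives in autotest_common.sh
      local i=20                 # poll budget, assumed
      while ! nc -z 127.0.0.1 4420 && (( --i > 0 )); do sleep 0.5; done
      (( i == 0 )) && return 1   # the timed-out check the xtrace shows at @860
      return 0                   # the successful exit at @864
  }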
00:33:47.519 [2024-10-08 17:49:39.301117] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.519 [2024-10-08 17:49:39.301613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-10-08 17:49:39.301625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.519 [2024-10-08 17:49:39.301631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.519 [2024-10-08 17:49:39.301781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.519 [2024-10-08 17:49:39.301931] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.519 [2024-10-08 17:49:39.301937] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.519 [2024-10-08 17:49:39.301942] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.519 [2024-10-08 17:49:39.304373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.519 [2024-10-08 17:49:39.313819] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.519 [2024-10-08 17:49:39.314304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-10-08 17:49:39.314334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.519 [2024-10-08 17:49:39.314343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.519 [2024-10-08 17:49:39.314510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.519 [2024-10-08 17:49:39.314663] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.519 [2024-10-08 17:49:39.314670] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.519 [2024-10-08 17:49:39.314675] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.519 [2024-10-08 17:49:39.317114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.519 [2024-10-08 17:49:39.326425] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.519 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:47.519 [2024-10-08 17:49:39.326918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-10-08 17:49:39.326934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.519 [2024-10-08 17:49:39.326940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.519 [2024-10-08 17:49:39.327095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.519 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:47.519 [2024-10-08 17:49:39.327251] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.519 [2024-10-08 17:49:39.327258] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.519 [2024-10-08 17:49:39.327264] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.519 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.519 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.519 [2024-10-08 17:49:39.329694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.519 [2024-10-08 17:49:39.332844] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:47.519 [2024-10-08 17:49:39.339143] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.519 [2024-10-08 17:49:39.339715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-10-08 17:49:39.339746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.519 [2024-10-08 17:49:39.339755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.519 [2024-10-08 17:49:39.339921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.519 [2024-10-08 17:49:39.340081] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.519 [2024-10-08 17:49:39.340087] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.519 [2024-10-08 17:49:39.340093] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.519 [2024-10-08 17:49:39.342527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.519 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.519 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:47.520 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.520 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.520 [2024-10-08 17:49:39.351851] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.520 [2024-10-08 17:49:39.352488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-10-08 17:49:39.352519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.520 [2024-10-08 17:49:39.352527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.520 [2024-10-08 17:49:39.352694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.520 [2024-10-08 17:49:39.352848] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.520 [2024-10-08 17:49:39.352855] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.520 [2024-10-08 17:49:39.352860] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.520 [2024-10-08 17:49:39.355301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.520 Malloc0 00:33:47.520 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.520 [2024-10-08 17:49:39.364466] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.520 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:47.520 [2024-10-08 17:49:39.365072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-10-08 17:49:39.365104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.520 [2024-10-08 17:49:39.365113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.520 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.520 [2024-10-08 17:49:39.365279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.520 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.520 [2024-10-08 17:49:39.365433] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.520 [2024-10-08 17:49:39.365440] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.520 [2024-10-08 17:49:39.365446] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:47.520 [2024-10-08 17:49:39.367884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.520 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.520 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:47.520 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.520 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.520 [2024-10-08 17:49:39.377199] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.520 [2024-10-08 17:49:39.377836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-10-08 17:49:39.377867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.520 [2024-10-08 17:49:39.377876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.520 [2024-10-08 17:49:39.378048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.520 [2024-10-08 17:49:39.378202] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.520 [2024-10-08 17:49:39.378209] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.520 [2024-10-08 17:49:39.378214] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.520 [2024-10-08 17:49:39.380646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.520 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.520 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:47.520 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.520 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.520 [2024-10-08 17:49:39.389810] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.520 [2024-10-08 17:49:39.390290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-10-08 17:49:39.390320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8100 with addr=10.0.0.2, port=4420 00:33:47.520 [2024-10-08 17:49:39.390329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8100 is same with the state(6) to be set 00:33:47.520 [2024-10-08 17:49:39.390497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8100 (9): Bad file descriptor 00:33:47.520 [2024-10-08 17:49:39.390650] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.520 [2024-10-08 17:49:39.390660] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.520 [2024-10-08 17:49:39.390666] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.520 [2024-10-08 17:49:39.393107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.520 [2024-10-08 17:49:39.395292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:47.520 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.520 17:49:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 555710 00:33:47.520 [2024-10-08 17:49:39.402558] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.781 [2024-10-08 17:49:39.558853] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
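The resets stop failing at this point because the target bring-up interleaved through the trace above has completed. Collected from the rpc_cmd calls in host/bdevperf.sh lines 17 through 21, the sequence is as follows, sketched with SPDK's scripts/rpc.py invoked directly (rpc_cmd is effectively a wrapper around it):

    # Target-side state that the bdevperf host connects to (assumes a
    # running nvmf_tgt and rpc.py from an SPDK checkout on PATH).
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener notice appears ("Listening on 10.0.0.2 port 4420"), the next reset completes with "Resetting controller successful".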
00:33:48.982 4086.00 IOPS, 15.96 MiB/s [2024-10-08T15:49:42.354Z] 5177.62 IOPS, 20.23 MiB/s [2024-10-08T15:49:43.295Z] 6039.44 IOPS, 23.59 MiB/s [2024-10-08T15:49:44.246Z] 6712.90 IOPS, 26.22 MiB/s [2024-10-08T15:49:45.185Z] 7283.91 IOPS, 28.45 MiB/s [2024-10-08T15:49:46.126Z] 7742.42 IOPS, 30.24 MiB/s [2024-10-08T15:49:47.067Z] 8137.31 IOPS, 31.79 MiB/s [2024-10-08T15:49:48.008Z] 8495.29 IOPS, 33.18 MiB/s 00:33:56.016 Latency(us) 00:33:56.016 [2024-10-08T15:49:48.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.016 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:56.016 Verification LBA range: start 0x0 length 0x4000 00:33:56.016 Nvme1n1 : 15.01 8790.10 34.34 14007.00 0.00 5596.96 552.96 18240.85 00:33:56.016 [2024-10-08T15:49:48.008Z] =================================================================================================================== 00:33:56.016 [2024-10-08T15:49:48.008Z] Total : 8790.10 34.34 14007.00 0.00 5596.96 552.96 18240.85 00:33:56.276 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:56.277 rmmod nvme_tcp 00:33:56.277 rmmod nvme_fabrics 00:33:56.277 rmmod nvme_keyring 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 556738 ']' 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 556738 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 556738 ']' 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 556738 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 556738 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
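A quick sanity check on the summary table: with a 4096-byte I/O size, the MiB/s column is IOPS * 4096 / 2^20. For the final Nvme1n1 row:

    # 8790.10 IOPS at 4 KiB per I/O:
    awk 'BEGIN { printf "%.2f MiB/s\n", 8790.10 * 4096 / (1024 * 1024) }'
    # prints 34.34 MiB/s, matching the table above

The same arithmetic holds for the per-second samples (4086.00 IOPS -> 15.96 MiB/s, and so on).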
common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 556738' 00:33:56.277 killing process with pid 556738 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 556738 00:33:56.277 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 556738 00:33:56.539 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:56.539 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:56.539 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:56.539 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:33:56.539 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:33:56.539 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:56.539 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:33:56.539 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:56.539 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:56.539 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.539 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:56.539 17:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.081 17:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:59.081 00:33:59.081 real 0m28.566s 00:33:59.081 user 1m3.772s 00:33:59.081 sys 0m7.835s 00:33:59.081 17:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:59.081 17:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:59.081 ************************************ 00:33:59.081 END TEST nvmf_bdevperf 00:33:59.081 ************************************ 00:33:59.081 17:49:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:59.081 17:49:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:59.081 17:49:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:59.081 17:49:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.081 ************************************ 00:33:59.081 START TEST nvmf_target_disconnect 00:33:59.081 ************************************ 00:33:59.081 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:59.081 * Looking for test storage... 
00:33:59.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:59.081 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:59.081 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:33:59.081 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:59.081 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:59.081 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:59.081 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:59.081 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:59.081 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:33:59.081 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:33:59.081 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:33:59.081 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:33:59.081 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:33:59.081 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:33:59.081 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:59.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.082 --rc genhtml_branch_coverage=1 00:33:59.082 --rc genhtml_function_coverage=1 00:33:59.082 --rc genhtml_legend=1 00:33:59.082 --rc geninfo_all_blocks=1 00:33:59.082 --rc geninfo_unexecuted_blocks=1 00:33:59.082 00:33:59.082 ' 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:59.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.082 --rc genhtml_branch_coverage=1 00:33:59.082 --rc genhtml_function_coverage=1 00:33:59.082 --rc genhtml_legend=1 00:33:59.082 --rc geninfo_all_blocks=1 00:33:59.082 --rc geninfo_unexecuted_blocks=1 00:33:59.082 00:33:59.082 ' 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:59.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.082 --rc genhtml_branch_coverage=1 00:33:59.082 --rc genhtml_function_coverage=1 00:33:59.082 --rc genhtml_legend=1 00:33:59.082 --rc geninfo_all_blocks=1 00:33:59.082 --rc geninfo_unexecuted_blocks=1 00:33:59.082 00:33:59.082 ' 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:59.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.082 --rc genhtml_branch_coverage=1 00:33:59.082 --rc genhtml_function_coverage=1 00:33:59.082 --rc genhtml_legend=1 00:33:59.082 --rc geninfo_all_blocks=1 00:33:59.082 --rc geninfo_unexecuted_blocks=1 00:33:59.082 00:33:59.082 ' 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:59.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:33:59.082 17:49:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:07.221 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:07.221 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:07.221 Found net devices under 0000:31:00.0: cvl_0_0 00:34:07.221 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:07.222 Found net devices under 0000:31:00.1: cvl_0_1 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
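nvmf_tcp_init, which the trace enters here, turns the two E810 ports found above into the test topology: cvl_0_0 becomes the target interface inside a private network namespace (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). The commands traced below reduce to the following sketch (distilled from what nvmf/common.sh does, not a substitute for it):

    # Target port isolated in its own netns; initiator port in the root netns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # verify the path before any NVMe/TCP traffic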
00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:07.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:07.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:34:07.222 00:34:07.222 --- 10.0.0.2 ping statistics --- 00:34:07.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.222 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:07.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:07.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:34:07.222 00:34:07.222 --- 10.0.0.1 ping statistics --- 00:34:07.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.222 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:07.222 ************************************ 00:34:07.222 START TEST nvmf_target_disconnect_tc1 00:34:07.222 ************************************ 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:07.222 17:49:58 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:07.222 [2024-10-08 17:49:58.607971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.222 [2024-10-08 17:49:58.608045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad7dc0 with addr=10.0.0.2, port=4420 00:34:07.222 [2024-10-08 17:49:58.608080] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:07.222 [2024-10-08 17:49:58.608099] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:07.222 [2024-10-08 17:49:58.608108] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:34:07.222 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:07.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:07.222 Initializing NVMe Controllers 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:07.222 00:34:07.222 real 0m0.131s 00:34:07.222 user 0m0.062s 00:34:07.222 sys 0m0.068s 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:07.222 ************************************ 00:34:07.222 END TEST nvmf_target_disconnect_tc1 00:34:07.222 ************************************ 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:07.222 ************************************ 00:34:07.222 START TEST nvmf_target_disconnect_tc2 00:34:07.222 ************************************ 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:07.222 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:07.223 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=562998 00:34:07.223 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 562998 00:34:07.223 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 562998 ']' 00:34:07.223 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:07.223 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:07.223 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:07.223 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:07.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:07.223 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:07.223 17:49:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:07.223 [2024-10-08 17:49:58.768433] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:34:07.223 [2024-10-08 17:49:58.768492] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:07.223 [2024-10-08 17:49:58.860349] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:07.223 [2024-10-08 17:49:58.954447] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:07.223 [2024-10-08 17:49:58.954511] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
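nvmfappstart launches nvmf_tgt inside the target namespace with core mask 0xF0, i.e. bits 4 through 7 set, which is why the reactor notices just below report cores 4, 5, 6 and 7 rather than 0 through 3 (the reconnect host side uses -c 0xF and takes those). A one-liner to read such a mask:

    # 0xF0 in binary is 11110000: reactor threads pinned to cores 4-7.
    echo 'obase=2; ibase=16; F0' | bc    # prints 11110000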
00:34:07.223 [2024-10-08 17:49:58.954519] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:07.223 [2024-10-08 17:49:58.954526] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:07.223 [2024-10-08 17:49:58.954533] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:07.223 [2024-10-08 17:49:58.956635] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:34:07.223 [2024-10-08 17:49:58.956795] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:34:07.223 [2024-10-08 17:49:58.956963] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:34:07.223 [2024-10-08 17:49:58.956990] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:34:07.793 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:07.793 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:34:07.793 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:07.793 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:07.793 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:07.793 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:07.793 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:07.793 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:07.794 Malloc0 00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:07.794 [2024-10-08 17:49:59.668690] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:07.794 17:49:59 
00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:07.794 [2024-10-08 17:49:59.709092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=563207
00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:34:07.794 17:49:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:34:10.363 17:50:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 562998
00:34:10.363 17:50:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:34:10.363 Read completed with error (sct=0, sc=8)
00:34:10.363 starting I/O failed
00:34:10.363 Read completed with error (sct=0, sc=8)
00:34:10.363 starting I/O failed
00:34:10.363 Read completed with error (sct=0, sc=8)
00:34:10.363 starting I/O failed
00:34:10.363 Read completed with error (sct=0, sc=8)
00:34:10.363 starting I/O failed
00:34:10.363 Read completed with error (sct=0, sc=8)
00:34:10.363 starting I/O failed
00:34:10.363 Read completed with error (sct=0, sc=8)
00:34:10.363 starting I/O failed
00:34:10.363 Read completed with error (sct=0, sc=8)
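Stripped of the xtrace noise, the sequence above is the standard NVMe-oF/TCP target bring-up followed by the disconnect workload: create a malloc bdev, create the TCP transport, create a subsystem, attach the namespace, expose a data listener and a discovery listener, then drive I/O with the reconnect example and kill -9 the target underneath it. A sketch of the same RPCs issued directly with SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock (the harness wraps these calls in its rpc_cmd helper; every flag below is copied from the log):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MB bdev, 512 B blocks
  $spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
  $spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # workload: queue depth 32, 4 KiB I/O, 50/50 random read/write, 10 s
  $spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &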
00:34:10.363 starting I/O failed
00:34:10.363 Write completed with error (sct=0, sc=8)
00:34:10.363 starting I/O failed
00:34:10.363 [2024-10-08 17:50:01.747714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:34:10.363 [2024-10-08 17:50:01.748100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:10.364 [2024-10-08 17:50:01.748364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:10.364 [2024-10-08 17:50:01.748811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.364 [2024-10-08 17:50:01.748841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.364 qpair failed and we were unable to recover it.
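This failure signature is exactly what tc2 is probing: the kill -9 of the target (pid 562998) destroys the admin and I/O qpairs mid-run, so the reconnect example's outstanding commands (it runs at queue depth 32, per -q 32) complete with sct=0, sc=8, each qpair reports a CQ transport error of -6, and every reconnect attempt afterwards fails with errno 111 because nothing is listening on 10.0.0.2:4420 any more. Reading sc 0x8 as the NVMe generic status 'Command Aborted due to SQ Deletion' is an interpretation of the code, not something the log states; the two errno values decode on a Linux host as:

  # -6 on the CQ is -ENXIO; 111 from connect() is ECONNREFUSED
  python3 -c 'import errno, os; print(errno.errorcode[6], "-", os.strerror(6))'
  # ENXIO - No such device or address
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # ECONNREFUSED - Connection refused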
00:34:10.368 [2024-10-08 17:50:01.800340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.800367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.800732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.800757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.801112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.801139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.801402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.801428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.801672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.801698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.802047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.802079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.802369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.802395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.802769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.802794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.803168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.803195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.803556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.803582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 
00:34:10.368 [2024-10-08 17:50:01.803959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.803992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.804212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.804240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.804570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.804596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.804988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.805014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.805269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.805294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.805664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.805689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.806052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.806078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.806449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.806475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.806842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.806868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.807247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.807274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 
00:34:10.368 [2024-10-08 17:50:01.807501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.807529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.807876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.807902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.808151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.808177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.808526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.808551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.808918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.808944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.809332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.809357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.809731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-10-08 17:50:01.809757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-10-08 17:50:01.810201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.810228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.810554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.810581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.810947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.810973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 
00:34:10.369 [2024-10-08 17:50:01.811331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.811356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.811721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.811748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.812088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.812119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.812489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.812518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.812776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.812805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.813182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.813211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.813585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.813613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.813987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.814017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.814392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.814420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.814777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.814805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 
00:34:10.369 [2024-10-08 17:50:01.815255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.815286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.815562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.815590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.815727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.815754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.816123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.816152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.816512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.816542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.816909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.816937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.817271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.817302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.817560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.817593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.817931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.817961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.818375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.818404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 
00:34:10.369 [2024-10-08 17:50:01.818765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.818795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.819166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.819195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.819569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.819599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.819966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.820010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.820352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.820382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.820729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.820758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.820995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.821025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.821389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.821418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.821790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.821819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.822162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.822192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 
00:34:10.369 [2024-10-08 17:50:01.822459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.822487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.822837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.822866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.823211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.823240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.823595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.823624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.823996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.824026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.824391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.824419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-10-08 17:50:01.824770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-10-08 17:50:01.824799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.825152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.825181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.825542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.825570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.825931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.825959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 
00:34:10.370 [2024-10-08 17:50:01.826198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.826230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.826561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.826590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.826953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.827006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.827398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.827427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.827782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.827810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.828159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.828190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.828556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.828584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.828952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.828989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.829377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.829407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.829742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.829772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 
00:34:10.370 [2024-10-08 17:50:01.830018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.830048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.830407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.830436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.830800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.830829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.831206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.831238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.831594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.831622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.832001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.832032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.832397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.832427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.832777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.832806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.833174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.833203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.833565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.833594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 
00:34:10.370 [2024-10-08 17:50:01.833967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.834005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.834342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.834370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.834744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.834773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.834993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.835023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.835280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.835308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.835658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.835690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.836259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.836362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.836689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.836727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.837121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.837155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.837558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.837589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 
00:34:10.370 [2024-10-08 17:50:01.837832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.837862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.838236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.838268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.838679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.838708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.839013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.839042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.839394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.839423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.839676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.839710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.370 qpair failed and we were unable to recover it. 00:34:10.370 [2024-10-08 17:50:01.840086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.370 [2024-10-08 17:50:01.840116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 00:34:10.371 [2024-10-08 17:50:01.840367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.840398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 00:34:10.371 [2024-10-08 17:50:01.840678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.840708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 00:34:10.371 [2024-10-08 17:50:01.840960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.841003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 
00:34:10.371 [2024-10-08 17:50:01.841378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.841409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 00:34:10.371 [2024-10-08 17:50:01.841815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.841844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 00:34:10.371 [2024-10-08 17:50:01.842106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.842144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 00:34:10.371 [2024-10-08 17:50:01.842488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.842518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 00:34:10.371 [2024-10-08 17:50:01.842856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.842885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 00:34:10.371 [2024-10-08 17:50:01.843225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.843256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 00:34:10.371 [2024-10-08 17:50:01.843487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.843516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 00:34:10.371 [2024-10-08 17:50:01.843891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.843920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 00:34:10.371 [2024-10-08 17:50:01.844280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.844312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 00:34:10.371 [2024-10-08 17:50:01.844669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.844698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 
00:34:10.371 [2024-10-08 17:50:01.845076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.845108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 00:34:10.371 [2024-10-08 17:50:01.845521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.845550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 00:34:10.371 [2024-10-08 17:50:01.845918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.845946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 00:34:10.371 [2024-10-08 17:50:01.846315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.846345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 00:34:10.371 [2024-10-08 17:50:01.846705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.846735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 00:34:10.371 [2024-10-08 17:50:01.847087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.847118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 00:34:10.371 [2024-10-08 17:50:01.847486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.847516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 00:34:10.371 [2024-10-08 17:50:01.847874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.847904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 00:34:10.371 [2024-10-08 17:50:01.848144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.848179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 00:34:10.371 [2024-10-08 17:50:01.848527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.371 [2024-10-08 17:50:01.848557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.371 qpair failed and we were unable to recover it. 
00:34:10.371 [2024-10-08 17:50:01.848932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.371 [2024-10-08 17:50:01.848961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.371 qpair failed and we were unable to recover it.
00:34:10.371 [... the same three-line sequence — posix_sock_create "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it." — repeats for every further reconnect attempt, with only the timestamps advancing, from 17:50:01.849220 through 17:50:01.926469 ...]
00:34:10.377 [2024-10-08 17:50:01.926841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-10-08 17:50:01.926871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-10-08 17:50:01.927227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.927257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.927525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.927554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.927911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.927940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.928237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.928268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.928500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.928528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.928914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.928943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.929308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.929338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.929704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.929733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.930088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.930119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.930484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.930513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 
00:34:10.377 [2024-10-08 17:50:01.930875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.930904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.931265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.931295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.931533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.931568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.931938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.931967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.932341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.932371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.932731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.932760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.933102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.933131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.933510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.933539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.933905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.933933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.934300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.934330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 
00:34:10.377 [2024-10-08 17:50:01.934694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.934723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.935080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.935111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.935501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.935531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.935899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.935928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.936288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.936318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.936666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.936695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.937071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.937103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.937445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.937474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.937720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.937749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.938117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.938148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 
00:34:10.377 [2024-10-08 17:50:01.938514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.938544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.938894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.938923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.939283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.939315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.939744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.939773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.377 qpair failed and we were unable to recover it. 00:34:10.377 [2024-10-08 17:50:01.940133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.377 [2024-10-08 17:50:01.940162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.940511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.940541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.940800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.940832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.941199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.941229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.941605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.941635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.941970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.942009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 
00:34:10.378 [2024-10-08 17:50:01.942396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.942425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.942849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.942878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.943240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.943270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.943623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.943653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.943923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.943952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.944345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.944376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.944751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.944781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.945141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.945171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.945536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.945565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.945929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.945957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 
00:34:10.378 [2024-10-08 17:50:01.946317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.946347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.946741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.946769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.947128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.947166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.947517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.947548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.947915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.947946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.948316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.948348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.948604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.948634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.949070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.949102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.949381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.949410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.949672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.949702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 
00:34:10.378 [2024-10-08 17:50:01.949963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.950000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.950383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.950414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.950750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.950781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.951024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.951053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.951302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.951330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.951682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.951712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.952039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.952113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.952476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.952505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.952864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.952894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.953252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.953284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 
00:34:10.378 [2024-10-08 17:50:01.953586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.953615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.953881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.953911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.954163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.954199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-10-08 17:50:01.954615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-10-08 17:50:01.954645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.955011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.955041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.955426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.955456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.955811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.955840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.956251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.956282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.956637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.956666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.957012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.957044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 
00:34:10.379 [2024-10-08 17:50:01.957468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.957497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.957845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.957875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.958214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.958246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.958604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.958634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.959008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.959039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.959277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.959310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.959664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.959695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.960059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.960090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.960455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.960484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.960851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.960879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 
00:34:10.379 [2024-10-08 17:50:01.961310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.961343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.961582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.961614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.962001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.962038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.962285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.962319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.962710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.962740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.963098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.963130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.963495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.963524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.963872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.963903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.964264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.964295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.964642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.964673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 
00:34:10.379 [2024-10-08 17:50:01.965062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.965094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.965431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.965461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.965811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.965840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.966197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.966230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.966494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.966524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.966874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.966904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.967340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.967373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.967706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.967736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.968101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.968131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.968489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.968520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 
00:34:10.379 [2024-10-08 17:50:01.968790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.968820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.969196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.969227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.969594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.969625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-10-08 17:50:01.970000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-10-08 17:50:01.970030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.380 [2024-10-08 17:50:01.970375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.380 [2024-10-08 17:50:01.970407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.380 qpair failed and we were unable to recover it. 00:34:10.380 [2024-10-08 17:50:01.970745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.380 [2024-10-08 17:50:01.970775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.380 qpair failed and we were unable to recover it. 00:34:10.380 [2024-10-08 17:50:01.971119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.380 [2024-10-08 17:50:01.971149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.380 qpair failed and we were unable to recover it. 00:34:10.380 [2024-10-08 17:50:01.971544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.380 [2024-10-08 17:50:01.971574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.380 qpair failed and we were unable to recover it. 00:34:10.380 [2024-10-08 17:50:01.971918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.380 [2024-10-08 17:50:01.971949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.380 qpair failed and we were unable to recover it. 00:34:10.380 [2024-10-08 17:50:01.972345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.380 [2024-10-08 17:50:01.972377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.380 qpair failed and we were unable to recover it. 
00:34:10.380 [2024-10-08 17:50:01.972636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.380 [2024-10-08 17:50:01.972665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.380 qpair failed and we were unable to recover it. 00:34:10.380 [2024-10-08 17:50:01.973040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.380 [2024-10-08 17:50:01.973071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.380 qpair failed and we were unable to recover it. 00:34:10.380 [2024-10-08 17:50:01.973443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.380 [2024-10-08 17:50:01.973473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.380 qpair failed and we were unable to recover it. 00:34:10.380 [2024-10-08 17:50:01.973716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.380 [2024-10-08 17:50:01.973745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.380 qpair failed and we were unable to recover it. 00:34:10.380 [2024-10-08 17:50:01.974009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.380 [2024-10-08 17:50:01.974043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.380 qpair failed and we were unable to recover it. 00:34:10.380 [2024-10-08 17:50:01.974309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.380 [2024-10-08 17:50:01.974339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.380 qpair failed and we were unable to recover it. 00:34:10.380 [2024-10-08 17:50:01.974611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.380 [2024-10-08 17:50:01.974641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.380 qpair failed and we were unable to recover it. 00:34:10.380 [2024-10-08 17:50:01.975004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.380 [2024-10-08 17:50:01.975034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.380 qpair failed and we were unable to recover it. 00:34:10.380 [2024-10-08 17:50:01.975388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.380 [2024-10-08 17:50:01.975419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.380 qpair failed and we were unable to recover it. 00:34:10.380 [2024-10-08 17:50:01.975740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.380 [2024-10-08 17:50:01.975769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.380 qpair failed and we were unable to recover it. 
00:34:10.380 [2024-10-08 17:50:01.976025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.380 [2024-10-08 17:50:01.976059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.380 qpair failed and we were unable to recover it.
00:34:10.380 (the three messages above repeat, with fresh timestamps, 37 more times between 17:50:01.976439 and 17:50:01.989343; every reconnect attempt to 10.0.0.2 port 4420 in this window fails with errno = 111 and the qpair is never recovered)
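errno = 111 is ECONNREFUSED on Linux: the target host answers, but nothing is listening on the NVMe/TCP port, so the kernel rejects each connect() immediately and the driver keeps retrying. The fragment below is a minimal standalone sketch, not part of the SPDK test suite; it shows how the same errno surfaces from a plain POSIX connect(), and the 10.0.0.2:4420 target merely mirrors the address in the log.

/* Minimal POSIX sketch: if the host is up but no listener is bound to
 * the port, connect() fails with errno 111 (Connection refused).
 * Build with: cc -o probe probe.c */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(4420);               /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With a reachable host and no listener this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}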
00:34:10.381 [2024-10-08 17:50:01.989579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xece0f0 is same with the state(6) to be set
00:34:10.381 Read completed with error (sct=0, sc=8)
00:34:10.381 starting I/O failed
00:34:10.381 (the pair of messages above repeats for all 32 outstanding I/Os on the qpair, 26 reads and 6 writes, each completed with error sct=0, sc=8)
00:34:10.381 [2024-10-08 17:50:01.990603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:10.381 [2024-10-08 17:50:01.990884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.381 [2024-10-08 17:50:01.990938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed0550 with addr=10.0.0.2, port=4420
00:34:10.381 qpair failed and we were unable to recover it.
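In NVMe completion-status terms, sct=0 selects the Generic Command Status type, and within that type sc=8 is, per the NVMe base specification, "Command Aborted due to SQ Deletion": once the TCP qpair dies, the driver aborts everything still in flight with that status, and the subsequent CQ transport error -6 is -ENXIO ("No such device or address") bubbling up from the dead transport. Below is a hedged C sketch of decoding such a status pair; the enum values follow the NVMe spec, but the helper itself is illustrative only, not an SPDK API.

/* Illustrative decoder for the (sct, sc) pair seen in the log above.
 * The constants mirror the NVMe base spec; describe_status() is a
 * hypothetical helper, not part of SPDK. */
#include <stdio.h>

enum { NVME_SCT_GENERIC = 0x0 };              /* Status Code Type 0 */
enum { NVME_SC_ABORTED_SQ_DELETION = 0x08 };  /* Generic status code 8 */

static const char *describe_status(int sct, int sc)
{
    if (sct == NVME_SCT_GENERIC && sc == NVME_SC_ABORTED_SQ_DELETION)
        return "Command Aborted due to SQ Deletion";
    return "other status";
}

int main(void)
{
    /* Matches the log lines "completed with error (sct=0, sc=8)" */
    printf("sct=0, sc=8 -> %s\n", describe_status(0, 0x08));
    return 0;
}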
00:34:10.381 [2024-10-08 17:50:01.991360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.381 [2024-10-08 17:50:01.991390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.381 qpair failed and we were unable to recover it.
00:34:10.385 (the three messages above repeat, with fresh timestamps, 159 more times between 17:50:01.991647 and 17:50:02.049827; every reconnect attempt to 10.0.0.2 port 4420 in this window fails with errno = 111 and the qpair is never recovered)
00:34:10.385 [2024-10-08 17:50:02.050263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.385 [2024-10-08 17:50:02.050293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.385 qpair failed and we were unable to recover it. 00:34:10.385 [2024-10-08 17:50:02.050551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.385 [2024-10-08 17:50:02.050580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.050927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.050957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.051229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.051258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.051659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.051688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.052041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.052073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.052422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.052452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.052831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.052860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.053107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.053139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.053486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.053515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 
00:34:10.386 [2024-10-08 17:50:02.053895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.053926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.054227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.054258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.054472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.054501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.054874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.054903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.055326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.055357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.055714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.055744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.056090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.056120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.056447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.056478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.056852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.056880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.057222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.057253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 
00:34:10.386 [2024-10-08 17:50:02.057460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.057490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.057717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.057745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.057980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.058011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.058249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.058278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.058639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.058667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.058923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.058952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.059312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.059342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.059706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.059736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.060102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.060133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.060482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.060512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 
00:34:10.386 [2024-10-08 17:50:02.060876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.060904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.061252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.061291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.061699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.061730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.062141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.062172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.062526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.062554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.062914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.062943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.063375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.063411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.063754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.063785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.064129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.064158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.064367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.064395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 
00:34:10.386 [2024-10-08 17:50:02.064765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.386 [2024-10-08 17:50:02.064794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.386 qpair failed and we were unable to recover it. 00:34:10.386 [2024-10-08 17:50:02.065161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.065192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.065632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.065661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.066018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.066049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.066430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.066459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.066819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.066848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.067205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.067235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.067656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.067686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.068053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.068083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.068487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.068515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 
00:34:10.387 [2024-10-08 17:50:02.068869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.068899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.069244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.069275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.069519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.069550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.069892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.069923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.070313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.070345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.070579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.070611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.070969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.071010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.071345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.071374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.071731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.071760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.072138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.072168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 
00:34:10.387 [2024-10-08 17:50:02.072406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.072435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.072832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.072861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.073110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.073140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.073400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.073429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.073796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.073824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.074199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.074229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.074621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.074649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.074901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.074930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.075316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.075346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.075715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.075743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 
00:34:10.387 [2024-10-08 17:50:02.076096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.076125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.076364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.076393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.076775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.076804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.077157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.077187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.077554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.077583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.077928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.077958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.078311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.078349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.078698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.078727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.079110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.079139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.079513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.079542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 
00:34:10.387 [2024-10-08 17:50:02.079902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.387 [2024-10-08 17:50:02.079931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.387 qpair failed and we were unable to recover it. 00:34:10.387 [2024-10-08 17:50:02.080370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.080401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.080656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.080685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.081027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.081057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.081423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.081452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.081698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.081726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.082087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.082118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.082412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.082441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.082796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.082825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.083160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.083191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 
00:34:10.388 [2024-10-08 17:50:02.083440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.083473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.083833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.083863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.084210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.084240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.084604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.084633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.084939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.084967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.085200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.085233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.085571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.085603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.085950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.085987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.086347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.086376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.086744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.086773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 
00:34:10.388 [2024-10-08 17:50:02.086937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.086969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.087361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.087391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.087726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.087756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.088123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.088155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.088521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.088550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.088914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.088942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.089313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.089344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.089707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.089736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.090100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.090129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.090496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.090525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 
00:34:10.388 [2024-10-08 17:50:02.090902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.090931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.091292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.091321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.091683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.091711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.092190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.092220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.092574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.092604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.092968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.093010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.093415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.093451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.093796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.093824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.094227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.094258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.094599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.094633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 
00:34:10.388 [2024-10-08 17:50:02.095006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.388 [2024-10-08 17:50:02.095037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.388 qpair failed and we were unable to recover it. 00:34:10.388 [2024-10-08 17:50:02.095293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.389 [2024-10-08 17:50:02.095325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.389 qpair failed and we were unable to recover it. 00:34:10.389 [2024-10-08 17:50:02.095689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.389 [2024-10-08 17:50:02.095718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.389 qpair failed and we were unable to recover it. 00:34:10.389 [2024-10-08 17:50:02.096136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.389 [2024-10-08 17:50:02.096167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.389 qpair failed and we were unable to recover it. 00:34:10.389 [2024-10-08 17:50:02.096601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.389 [2024-10-08 17:50:02.096631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.389 qpair failed and we were unable to recover it. 00:34:10.389 [2024-10-08 17:50:02.096994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.389 [2024-10-08 17:50:02.097025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.389 qpair failed and we were unable to recover it. 00:34:10.389 [2024-10-08 17:50:02.097370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.389 [2024-10-08 17:50:02.097401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.389 qpair failed and we were unable to recover it. 00:34:10.389 [2024-10-08 17:50:02.097761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.389 [2024-10-08 17:50:02.097790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.389 qpair failed and we were unable to recover it. 00:34:10.389 [2024-10-08 17:50:02.098014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.389 [2024-10-08 17:50:02.098047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.389 qpair failed and we were unable to recover it. 00:34:10.389 [2024-10-08 17:50:02.098409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.389 [2024-10-08 17:50:02.098438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.389 qpair failed and we were unable to recover it. 
00:34:10.389 [2024-10-08 17:50:02.098800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.389 [2024-10-08 17:50:02.098829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.389 qpair failed and we were unable to recover it.
00:34:10.394 [the three messages above repeat 209 more times, one triple per reconnect attempt, with timestamps from 17:50:02.099172 through 17:50:02.178396; every attempt fails identically with errno = 111 on the same tqpair 0x7faa48000b90 against addr=10.0.0.2, port=4420]
00:34:10.394 [2024-10-08 17:50:02.178751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.394 [2024-10-08 17:50:02.178780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.394 qpair failed and we were unable to recover it. 00:34:10.394 [2024-10-08 17:50:02.179145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.394 [2024-10-08 17:50:02.179176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.394 qpair failed and we were unable to recover it. 00:34:10.394 [2024-10-08 17:50:02.179537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.394 [2024-10-08 17:50:02.179566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.394 qpair failed and we were unable to recover it. 00:34:10.394 [2024-10-08 17:50:02.179942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.394 [2024-10-08 17:50:02.179986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.394 qpair failed and we were unable to recover it. 00:34:10.394 [2024-10-08 17:50:02.180239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.394 [2024-10-08 17:50:02.180267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.394 qpair failed and we were unable to recover it. 00:34:10.394 [2024-10-08 17:50:02.180614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.394 [2024-10-08 17:50:02.180642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.394 qpair failed and we were unable to recover it. 00:34:10.394 [2024-10-08 17:50:02.180902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.394 [2024-10-08 17:50:02.180931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.394 qpair failed and we were unable to recover it. 00:34:10.394 [2024-10-08 17:50:02.181296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.394 [2024-10-08 17:50:02.181326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.394 qpair failed and we were unable to recover it. 00:34:10.394 [2024-10-08 17:50:02.181696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.394 [2024-10-08 17:50:02.181724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.394 qpair failed and we were unable to recover it. 00:34:10.394 [2024-10-08 17:50:02.182101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.394 [2024-10-08 17:50:02.182130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.394 qpair failed and we were unable to recover it. 
00:34:10.394 [2024-10-08 17:50:02.182550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.182578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.182919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.182949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.183276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.183305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.183672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.183702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.184040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.184070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.184429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.184460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.184809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.184839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.185200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.185231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.185601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.185630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.186001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.186031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 
00:34:10.395 [2024-10-08 17:50:02.186301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.186329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.186698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.186727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.187088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.187118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.187484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.187514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.187886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.187914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.188368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.188398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.188762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.188799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.189165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.189194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.189556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.189585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.189953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.189990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 
00:34:10.395 [2024-10-08 17:50:02.190228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.190260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.190508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.190539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.190984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.191016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.191400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.191428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.191792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.191822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.192202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.192234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.192658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.192686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.193045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.193075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.193420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.193449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.193818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.193847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 
00:34:10.395 [2024-10-08 17:50:02.194118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.194148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.194535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.194565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.194937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.194965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.195323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.195359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.195600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.195629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.195987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.196016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.196420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.196449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.196809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.196838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.197200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.197230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 00:34:10.395 [2024-10-08 17:50:02.197582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.395 [2024-10-08 17:50:02.197611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.395 qpair failed and we were unable to recover it. 
00:34:10.395 [2024-10-08 17:50:02.197951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.197988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.198356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.198385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.198754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.198783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.199174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.199204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.199575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.199604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.199841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.199872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.200039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.200069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.200449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.200478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.200847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.200878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.201259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.201289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 
00:34:10.396 [2024-10-08 17:50:02.201645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.201673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.201900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.201931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.202324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.202355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.202725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.202756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.203129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.203159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.203509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.203538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.203909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.203937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.204269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.204299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.204672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.204701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.204989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.205019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 
00:34:10.396 [2024-10-08 17:50:02.205400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.205431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.205791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.205820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.206199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.206229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.206553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.206583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.206942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.206970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.207351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.207380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.207740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.207774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.208119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.208149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.208405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.208433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.208780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.208809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 
00:34:10.396 [2024-10-08 17:50:02.209058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.209091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.209466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.209496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.209705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.209737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.210095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.210132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.210497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.210526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.210754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.210786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.211167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.211197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.211559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.211587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.211952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.211991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 00:34:10.396 [2024-10-08 17:50:02.212349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.396 [2024-10-08 17:50:02.212378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.396 qpair failed and we were unable to recover it. 
00:34:10.396 [2024-10-08 17:50:02.212614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.212646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.213004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.213034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.213406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.213435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.213805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.213833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.214193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.214222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.214656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.214686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.215012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.215043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.215438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.215467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.215826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.215854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.216226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.216256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 
00:34:10.397 [2024-10-08 17:50:02.216596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.216624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.216989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.217020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.217382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.217411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.217777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.217806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.218197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.218226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.218506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.218534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.218902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.218931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.219299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.219329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.219691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.219719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.220086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.220116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 
00:34:10.397 [2024-10-08 17:50:02.220336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.220367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.220743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.220772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.221147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.221177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.221536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.221565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.221815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.221846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.222226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.222256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.222621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.222650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.223001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.223030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.223260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.223292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.223648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.223677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 
00:34:10.397 [2024-10-08 17:50:02.224057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.224089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.224410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.224439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.224802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.224831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.225200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.225236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.225498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.225526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.225894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.225923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.226266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.226295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.226656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.226684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.227123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.227153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 00:34:10.397 [2024-10-08 17:50:02.227520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.397 [2024-10-08 17:50:02.227550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.397 qpair failed and we were unable to recover it. 
00:34:10.398 [2024-10-08 17:50:02.227909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.398 [2024-10-08 17:50:02.227938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.398 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair, always for tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420, errno = 111, followed by "qpair failed and we were unable to recover it.", repeats continuously from 17:50:02.227909 through 17:50:02.304862 ...]
00:34:10.403 [2024-10-08 17:50:02.304833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.403 [2024-10-08 17:50:02.304862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.403 qpair failed and we were unable to recover it.
00:34:10.403 [2024-10-08 17:50:02.305215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.403 [2024-10-08 17:50:02.305244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.403 qpair failed and we were unable to recover it. 00:34:10.403 [2024-10-08 17:50:02.305621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.403 [2024-10-08 17:50:02.305649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.403 qpair failed and we were unable to recover it. 00:34:10.403 [2024-10-08 17:50:02.306017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.403 [2024-10-08 17:50:02.306049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.403 qpair failed and we were unable to recover it. 00:34:10.403 [2024-10-08 17:50:02.306404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.403 [2024-10-08 17:50:02.306434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.403 qpair failed and we were unable to recover it. 00:34:10.403 [2024-10-08 17:50:02.306796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.403 [2024-10-08 17:50:02.306826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.403 qpair failed and we were unable to recover it. 00:34:10.403 [2024-10-08 17:50:02.307119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.403 [2024-10-08 17:50:02.307148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.403 qpair failed and we were unable to recover it. 00:34:10.403 [2024-10-08 17:50:02.307415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.403 [2024-10-08 17:50:02.307447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.403 qpair failed and we were unable to recover it. 00:34:10.403 [2024-10-08 17:50:02.307806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.403 [2024-10-08 17:50:02.307835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.403 qpair failed and we were unable to recover it. 00:34:10.403 [2024-10-08 17:50:02.308049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.403 [2024-10-08 17:50:02.308082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.403 qpair failed and we were unable to recover it. 00:34:10.403 [2024-10-08 17:50:02.308481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.403 [2024-10-08 17:50:02.308512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.403 qpair failed and we were unable to recover it. 
00:34:10.403 [2024-10-08 17:50:02.308884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.403 [2024-10-08 17:50:02.308921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.403 qpair failed and we were unable to recover it. 00:34:10.403 [2024-10-08 17:50:02.309266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.403 [2024-10-08 17:50:02.309297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.403 qpair failed and we were unable to recover it. 00:34:10.403 [2024-10-08 17:50:02.309640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.403 [2024-10-08 17:50:02.309668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.403 qpair failed and we were unable to recover it. 00:34:10.403 [2024-10-08 17:50:02.310006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.403 [2024-10-08 17:50:02.310037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.403 qpair failed and we were unable to recover it. 00:34:10.403 [2024-10-08 17:50:02.310382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.403 [2024-10-08 17:50:02.310413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.403 qpair failed and we were unable to recover it. 00:34:10.403 [2024-10-08 17:50:02.310758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.403 [2024-10-08 17:50:02.310788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.403 qpair failed and we were unable to recover it. 00:34:10.403 [2024-10-08 17:50:02.311147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.403 [2024-10-08 17:50:02.311178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.403 qpair failed and we were unable to recover it. 00:34:10.403 [2024-10-08 17:50:02.311550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.403 [2024-10-08 17:50:02.311579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.403 qpair failed and we were unable to recover it. 00:34:10.403 [2024-10-08 17:50:02.311952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.403 [2024-10-08 17:50:02.311990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.403 qpair failed and we were unable to recover it. 00:34:10.403 [2024-10-08 17:50:02.312347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.312376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 
00:34:10.404 [2024-10-08 17:50:02.312683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.312713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.313078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.313107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.313473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.313504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.313828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.313863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.314224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.314254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.314612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.314642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.314987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.315017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.315380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.315409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.315775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.315804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.316053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.316083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 
00:34:10.404 [2024-10-08 17:50:02.316451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.316480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.316833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.316862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.317084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.317114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.317486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.317516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.317879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.317909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.318148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.318181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.318512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.318543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.318875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.318905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.319276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.319307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.319659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.319690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 
00:34:10.404 [2024-10-08 17:50:02.320035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.320066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.320437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.320465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.320815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.320844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.321203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.321234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.321586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.321615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.321993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.322024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.322393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.322423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.322672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.322703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.323084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.323114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.323492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.323520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 
00:34:10.404 [2024-10-08 17:50:02.323886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.323915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.324286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.324317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.324659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.324689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.325056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.325087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.325415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.325443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.325778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.325807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.326185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.326216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.326583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.326613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.326983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.404 [2024-10-08 17:50:02.327013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.404 qpair failed and we were unable to recover it. 00:34:10.404 [2024-10-08 17:50:02.327367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.405 [2024-10-08 17:50:02.327397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.405 qpair failed and we were unable to recover it. 
00:34:10.405 [2024-10-08 17:50:02.327755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.405 [2024-10-08 17:50:02.327784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.405 qpair failed and we were unable to recover it. 00:34:10.405 [2024-10-08 17:50:02.328128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.405 [2024-10-08 17:50:02.328159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.405 qpair failed and we were unable to recover it. 00:34:10.405 [2024-10-08 17:50:02.328524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.405 [2024-10-08 17:50:02.328553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.405 qpair failed and we were unable to recover it. 00:34:10.405 [2024-10-08 17:50:02.328915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.405 [2024-10-08 17:50:02.328952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.405 qpair failed and we were unable to recover it. 00:34:10.405 [2024-10-08 17:50:02.329324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.405 [2024-10-08 17:50:02.329355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.405 qpair failed and we were unable to recover it. 00:34:10.405 [2024-10-08 17:50:02.329603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.405 [2024-10-08 17:50:02.329634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.405 qpair failed and we were unable to recover it. 00:34:10.405 [2024-10-08 17:50:02.330044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.405 [2024-10-08 17:50:02.330075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.405 qpair failed and we were unable to recover it. 00:34:10.405 [2024-10-08 17:50:02.330434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.405 [2024-10-08 17:50:02.330462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.405 qpair failed and we were unable to recover it. 00:34:10.405 [2024-10-08 17:50:02.330804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.405 [2024-10-08 17:50:02.330832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.405 qpair failed and we were unable to recover it. 00:34:10.405 [2024-10-08 17:50:02.331158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.405 [2024-10-08 17:50:02.331187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.405 qpair failed and we were unable to recover it. 
00:34:10.405 [2024-10-08 17:50:02.331598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.405 [2024-10-08 17:50:02.331628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.405 qpair failed and we were unable to recover it. 00:34:10.679 [2024-10-08 17:50:02.331954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-10-08 17:50:02.331994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-10-08 17:50:02.333860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-10-08 17:50:02.333925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-10-08 17:50:02.334307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-10-08 17:50:02.334344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-10-08 17:50:02.336114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-10-08 17:50:02.336172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-10-08 17:50:02.336477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-10-08 17:50:02.336511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-10-08 17:50:02.336894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-10-08 17:50:02.336923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-10-08 17:50:02.337262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-10-08 17:50:02.337294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-10-08 17:50:02.337636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-10-08 17:50:02.337665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-10-08 17:50:02.338032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-10-08 17:50:02.338064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 
00:34:10.679 [2024-10-08 17:50:02.338481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-10-08 17:50:02.338510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.338878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.338908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.339266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.339296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.339721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.339749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.340081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.340112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.340459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.340487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.340848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.340876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.341228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.341258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.341619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.341649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.342018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.342049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 
00:34:10.680 [2024-10-08 17:50:02.342475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.342504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.342744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.342776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.343173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.343204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.343570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.343600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.343965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.344001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.344339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.344368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.344738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.344768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.345124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.345157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.345393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.345423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.345802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.345833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 
00:34:10.680 [2024-10-08 17:50:02.346206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.346237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.346533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.346561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.346917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.346945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.347194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.347232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.347600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.347631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.347971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.348009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.350226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.350295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.350736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.350773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.351136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.351167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.351416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.351445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 
00:34:10.680 [2024-10-08 17:50:02.351790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.351826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.352141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.352172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.352541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.352571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.352947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.353000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.353359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.353391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.353747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.353783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.354108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.354139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.354477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.354507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.354870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.354900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-10-08 17:50:02.355247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-10-08 17:50:02.355278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 
00:34:10.680 [2024-10-08 17:50:02.355649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-10-08 17:50:02.355679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-10-08 17:50:02.356043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-10-08 17:50:02.356073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-10-08 17:50:02.356451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-10-08 17:50:02.356479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-10-08 17:50:02.356839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-10-08 17:50:02.356868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-10-08 17:50:02.357210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-10-08 17:50:02.357242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-10-08 17:50:02.357611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-10-08 17:50:02.357640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-10-08 17:50:02.358007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-10-08 17:50:02.358038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-10-08 17:50:02.358374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-10-08 17:50:02.358402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-10-08 17:50:02.358833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-10-08 17:50:02.358862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-10-08 17:50:02.359196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-10-08 17:50:02.359228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 
00:34:10.681 [2024-10-08 17:50:02.359630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.681 [2024-10-08 17:50:02.359661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.681 qpair failed and we were unable to recover it.
00:34:10.686 [... the same three-line error sequence repeats roughly 200 more times, with only the microsecond timestamps advancing (17:50:02.359 through 17:50:02.438); every occurrence reports errno = 111 for tqpair=0x7faa48000b90, addr=10.0.0.2, port=4420 ...]
00:34:10.686 [2024-10-08 17:50:02.438408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-10-08 17:50:02.438437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-10-08 17:50:02.438806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-10-08 17:50:02.438835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-10-08 17:50:02.439217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-10-08 17:50:02.439247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-10-08 17:50:02.439585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-10-08 17:50:02.439614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-10-08 17:50:02.439861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-10-08 17:50:02.439892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-10-08 17:50:02.440264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-10-08 17:50:02.440294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-10-08 17:50:02.440664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-10-08 17:50:02.440692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-10-08 17:50:02.441060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-10-08 17:50:02.441088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-10-08 17:50:02.441472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-10-08 17:50:02.441501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-10-08 17:50:02.441766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-10-08 17:50:02.441795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 
00:34:10.686 [2024-10-08 17:50:02.442148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-10-08 17:50:02.442178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-10-08 17:50:02.442553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-10-08 17:50:02.442581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-10-08 17:50:02.442930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-10-08 17:50:02.442959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-10-08 17:50:02.443323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-10-08 17:50:02.443353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-10-08 17:50:02.443591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.443621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.444003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.444034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.444347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.444376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.444743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.444775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.445032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.445062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.445408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.445437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 
00:34:10.687 [2024-10-08 17:50:02.445800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.445829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.446201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.446230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.446591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.446621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.446993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.447024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.447380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.447408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.447770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.447799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.448171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.448201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.448554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.448584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.448953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.448991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.449354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.449383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 
00:34:10.687 [2024-10-08 17:50:02.449743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.449778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.450145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.450175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.450587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.450616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.450863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.450894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.451287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.451319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.451674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.451702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.452142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.452171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.452531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.452560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.452989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.453019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.453322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.453351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 
00:34:10.687 [2024-10-08 17:50:02.453701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.453730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.454095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.454127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.454475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.454503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.454899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.454927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.455304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.455335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.455706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.455736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.456094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.456125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.456497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.456526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.456922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.456952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.457301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.457331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 
00:34:10.687 [2024-10-08 17:50:02.457660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.457690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.458047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.458078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.458451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.458480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.458838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-10-08 17:50:02.458867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-10-08 17:50:02.459230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.459260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.459628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.459658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.460010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.460041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.460462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.460492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.460834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.460862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.461233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.461264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 
00:34:10.688 [2024-10-08 17:50:02.461505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.461533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.461898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.461926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.462288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.462318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.462687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.462716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.463084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.463114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.463489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.463518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.463810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.463840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.464200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.464230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.464594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.464623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.464988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.465018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 
00:34:10.688 [2024-10-08 17:50:02.465363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.465398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.465770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.465797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.466165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.466195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.466427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.466458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.466819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.466850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.467246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.467278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.467536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.467564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.467921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.467950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.468290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.468320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.468664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.468693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 
00:34:10.688 [2024-10-08 17:50:02.468943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.468972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.469324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.469353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.469717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.469747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.470108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.470138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.470347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.470375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.470750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.470778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.471150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.471180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.471613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.471643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.471999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-10-08 17:50:02.472029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-10-08 17:50:02.472427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.472455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 
00:34:10.689 [2024-10-08 17:50:02.472785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.472814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.473176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.473206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.473560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.473588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.473948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.473984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.474321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.474351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.474690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.474718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.475087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.475117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.475360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.475393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.475750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.475780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.476145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.476177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 
00:34:10.689 [2024-10-08 17:50:02.476514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.476543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.476907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.476936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.477298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.477329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.477658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.477688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.478051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.478082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.478459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.478488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.478858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.478886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.479120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.479154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.479393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.479423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.479721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.479750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 
00:34:10.689 [2024-10-08 17:50:02.480106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.480142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.480514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.480543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.480896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.480924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.481279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.481310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.481669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.481697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.481995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.482025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.482373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.482403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.482733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.482762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.483129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.483160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.483413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.483442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 
00:34:10.689 [2024-10-08 17:50:02.483796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.483825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.484168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.484200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.484573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.484602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.484972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.485028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.485277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.485310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.485554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.485585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.485964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.486002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.486355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.486385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.486753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-10-08 17:50:02.486782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-10-08 17:50:02.487123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.690 [2024-10-08 17:50:02.487154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.690 qpair failed and we were unable to recover it. 
00:34:10.690 [2024-10-08 17:50:02.487485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.690 [2024-10-08 17:50:02.487513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.690 qpair failed and we were unable to recover it.
[... the three lines above repeat roughly 200 more times between 17:50:02.487800 and 17:50:02.563799, console timestamps 00:34:10.690 through 00:34:10.695: every retried connect() attempt to 10.0.0.2, port=4420 on the same tqpair=0x7faa48000b90 fails with errno = 111, and each qpair failure is unrecoverable ...]
00:34:10.695 [2024-10-08 17:50:02.564067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.564098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-10-08 17:50:02.564457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.564485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-10-08 17:50:02.564711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.564741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-10-08 17:50:02.565115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.565146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-10-08 17:50:02.565523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.565551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-10-08 17:50:02.565912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.565941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-10-08 17:50:02.566382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.566413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-10-08 17:50:02.566747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.566777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-10-08 17:50:02.567123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.567153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-10-08 17:50:02.567521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.567550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 
00:34:10.695 [2024-10-08 17:50:02.567798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.567830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-10-08 17:50:02.568232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.568263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-10-08 17:50:02.568621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.568650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-10-08 17:50:02.569011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.569040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-10-08 17:50:02.569418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.569446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-10-08 17:50:02.569789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.569818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-10-08 17:50:02.570185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.570215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-10-08 17:50:02.570581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.570611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-10-08 17:50:02.570969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.571009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-10-08 17:50:02.571374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.571403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 
00:34:10.695 [2024-10-08 17:50:02.571783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.571811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-10-08 17:50:02.572073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.572102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-10-08 17:50:02.572453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.572480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-10-08 17:50:02.572840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.572870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-10-08 17:50:02.573239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-10-08 17:50:02.573271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.573626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.573655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.574023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.574052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.574309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.574339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.574702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.574733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.575072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.575102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 
00:34:10.696 [2024-10-08 17:50:02.575354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.575386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.575758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.575787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.576063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.576092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.576467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.576496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.576865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.576895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.577012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.577042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.577422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.577452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.577682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.577720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.578154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.578184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.578543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.578572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 
00:34:10.696 [2024-10-08 17:50:02.578998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.579027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.579385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.579415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.579777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.579805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.580167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.580198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.580562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.580590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.580955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.581002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.581352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.581382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.581752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.581781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.582152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.582182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.582547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.582576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 
00:34:10.696 [2024-10-08 17:50:02.582801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.582831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.583186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.583216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.583578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.583608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.583969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.584006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.584365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.584401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.584728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.584756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.585111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.585141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.585363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.585393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.585772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.585801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.586046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.586075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 
00:34:10.696 [2024-10-08 17:50:02.586438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.586468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.586765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.586793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.587141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.587170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.587578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.587607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.696 qpair failed and we were unable to recover it. 00:34:10.696 [2024-10-08 17:50:02.587971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.696 [2024-10-08 17:50:02.588029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.588382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.588412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.588780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.588809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.589178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.589208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.589466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.589497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.589862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.589891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 
00:34:10.697 [2024-10-08 17:50:02.590228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.590259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.590618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.590646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.591016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.591046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.591458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.591487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.591724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.591752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.592089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.592118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.592466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.592494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.592862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.592898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.593250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.593280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.593652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.593681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 
00:34:10.697 [2024-10-08 17:50:02.594050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.594079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.594461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.594489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.594825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.594855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.595205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.595236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.595597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.595626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.595995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.596026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.596379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.596407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.596781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.596811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.597195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.597224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.597589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.597619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 
00:34:10.697 [2024-10-08 17:50:02.598006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.598036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.598339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.598369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.598761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.598790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.599147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.599176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.599544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.599573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.599934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.599963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.600341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.600370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.600725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.600754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.601016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.601045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-10-08 17:50:02.601424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-10-08 17:50:02.601453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 
00:34:10.698 [2024-10-08 17:50:02.601824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.601853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.602209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.602241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.602605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.602633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.603040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.603070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.603335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.603367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.603596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.603628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.604017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.604047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.604396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.604425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.604829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.604858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.605268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.605297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 
00:34:10.698 [2024-10-08 17:50:02.605652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.605682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.606056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.606085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.606420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.606450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.606818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.606847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.607067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.607098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.607473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.607502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.607871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.607900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.608343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.608378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.608734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.608763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.609140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.609171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 
00:34:10.698 [2024-10-08 17:50:02.609533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.609561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.609925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.609952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.610323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.610353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.610725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.610754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.611131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.611162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.611529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.611558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.611923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.611953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.612387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.612416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.612780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.612810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-10-08 17:50:02.613178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-10-08 17:50:02.613208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 
00:34:10.698 [2024-10-08 17:50:02.613566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.698 [2024-10-08 17:50:02.613595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.698 qpair failed and we were unable to recover it.
00:34:10.698 [2024-10-08 17:50:02.613955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.698 [2024-10-08 17:50:02.613991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.698 qpair failed and we were unable to recover it.
00:34:10.698 [2024-10-08 17:50:02.614371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.698 [2024-10-08 17:50:02.614403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.698 qpair failed and we were unable to recover it.
00:34:10.698 [2024-10-08 17:50:02.614677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.698 [2024-10-08 17:50:02.614706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.698 qpair failed and we were unable to recover it.
00:34:10.698 [2024-10-08 17:50:02.615100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.698 [2024-10-08 17:50:02.615129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.698 qpair failed and we were unable to recover it.
00:34:10.698 [2024-10-08 17:50:02.615483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.698 [2024-10-08 17:50:02.615513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.698 qpair failed and we were unable to recover it.
00:34:10.698 [2024-10-08 17:50:02.615762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.698 [2024-10-08 17:50:02.615791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.698 qpair failed and we were unable to recover it.
00:34:10.698 [2024-10-08 17:50:02.616249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.698 [2024-10-08 17:50:02.616278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.698 qpair failed and we were unable to recover it.
00:34:10.698 [2024-10-08 17:50:02.616637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.698 [2024-10-08 17:50:02.616665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.698 qpair failed and we were unable to recover it.
00:34:10.698 [2024-10-08 17:50:02.617032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.617062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.617419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.617454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.617824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.617855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.618193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.618224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.618568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.618596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.618955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.618993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.619328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.619357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.619716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.619744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.619991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.620021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.620283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.620313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.620691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.620720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.621007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.621038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.621387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.621416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.621775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.621803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.622189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.622227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.622549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.622578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.622946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.622982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.623343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.623371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.623768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.623802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.624201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.624230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.624570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.624598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.624903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.624939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.625334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.625365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.625723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.625751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.626018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.626048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.626411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.626439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.626733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.626760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.627111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.627141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.627389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.627417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.627751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.627779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.628135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.628164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.628534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.628562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.628927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.628956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.629358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.629389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.629755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.629784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.630137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.630167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.630422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.630455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.630802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.630833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.631205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.631235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.631606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.631635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
00:34:10.699 [2024-10-08 17:50:02.632004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-10-08 17:50:02.632033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.632420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.632449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.632817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.632846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.633220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.633249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.633482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.633513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.633888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.633918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.634272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.634301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.634662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.634692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.635027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.635057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.635421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.635449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.635814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.635843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.636190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.636219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.636583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.636612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.636957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.637006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.637356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.637384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.637747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.637777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.638004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.638036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.638258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.638288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.638525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.638564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.638945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.638981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.639346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.639375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.639750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.639780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.640121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.640153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.640517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.640546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.640908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.640937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.641310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.641341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.641766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.641794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.642163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.642192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.642553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.642583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.642921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.642949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.643334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.643363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.643721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.643750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.644095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.644125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.644538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.644567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.644944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.644988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.645264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.645294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.645620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.645648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.646006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.646037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.646404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.646433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.646789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.646818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.700 qpair failed and we were unable to recover it.
00:34:10.700 [2024-10-08 17:50:02.647157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.700 [2024-10-08 17:50:02.647188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.647537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.647566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.647920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.647948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.648310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.648338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.648675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.648703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.648933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.648963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.649315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.649344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.649695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.649724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.650054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.650083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.650327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.650355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.650593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.650623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.651005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.651035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.651379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.651409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.651756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.651784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.652116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.652148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.652396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.652427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.652797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.652826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.653195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.653225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.653591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.653627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.653861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.653891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.654242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.654273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.654634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.654664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.654911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.654940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.655311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.655341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.655777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.655808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.656166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.656196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.656557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.656586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.656828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.656859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.657257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.657287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.657653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.657681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.657927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.657959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-10-08 17:50:02.658217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-10-08 17:50:02.658248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.976 [2024-10-08 17:50:02.660693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-10-08 17:50:02.660761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-10-08 17:50:02.661237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-10-08 17:50:02.661274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-10-08 17:50:02.661527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-10-08 17:50:02.661559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-10-08 17:50:02.661924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-10-08 17:50:02.661955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-10-08 17:50:02.662329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-10-08 17:50:02.662359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-10-08 17:50:02.662727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-10-08 17:50:02.662755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-10-08 17:50:02.663107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-10-08 17:50:02.663138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-10-08 17:50:02.663507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-10-08 17:50:02.663537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-10-08 17:50:02.663877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-10-08 17:50:02.663906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-10-08 17:50:02.664270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-10-08 17:50:02.664300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-10-08 17:50:02.664740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-10-08 17:50:02.664768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-10-08 17:50:02.665121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-10-08 17:50:02.665151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-10-08 17:50:02.665592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-10-08 17:50:02.665622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-10-08 17:50:02.665999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-10-08 17:50:02.666030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-10-08 17:50:02.666427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-10-08 17:50:02.666456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-10-08 17:50:02.666814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-10-08 17:50:02.666843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-10-08 17:50:02.667199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-10-08 17:50:02.667229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.976 qpair failed and we were unable to recover it.
00:34:10.976 [2024-10-08 17:50:02.667596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.976 [2024-10-08 17:50:02.667625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.667998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.668028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.668384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.668413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.668685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.668715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.669090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.669120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.669367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.669395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.669743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.669771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.670137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.670168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.670532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.670562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.670920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.670955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.671324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.671353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.671589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.671617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.672001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.672031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.672391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.672420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.672782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.672812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.673144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.673174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.673547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.673576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.673941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.673970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.674219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.674253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.674636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.674666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.675041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.675073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.675324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.675353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.675587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.675615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.675992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.676022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.676369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.676399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.676770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.676800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.677144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.677174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.679413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.679480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.679916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.679951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.680227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.977 [2024-10-08 17:50:02.680261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.977 qpair failed and we were unable to recover it.
00:34:10.977 [2024-10-08 17:50:02.680615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.680644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.681002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.681034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.681401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.681430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.681764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.681794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.682160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.682191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.682551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.682579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.682937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.682966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.683300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.683329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.683553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.683584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.683942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.683973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.684331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.684359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.684723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.684751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.684995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.685029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.685397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.685425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.685774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.685802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.686258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.686288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.686665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.686695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.687051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.687081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.687475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.687503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.687876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.687910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.688278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.688308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.688667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.688697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.689077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.689107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.689469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.689497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.689841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.689870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.690289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.690318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.690662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.690692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.691058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.691089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.691447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.978 [2024-10-08 17:50:02.691476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.978 qpair failed and we were unable to recover it.
00:34:10.978 [2024-10-08 17:50:02.691830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.979 [2024-10-08 17:50:02.691858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.979 qpair failed and we were unable to recover it.
00:34:10.979 [2024-10-08 17:50:02.692202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.979 [2024-10-08 17:50:02.692232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.979 qpair failed and we were unable to recover it.
00:34:10.979 [2024-10-08 17:50:02.692598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.979 [2024-10-08 17:50:02.692627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.979 qpair failed and we were unable to recover it.
00:34:10.979 [2024-10-08 17:50:02.693003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.979 [2024-10-08 17:50:02.693033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.979 qpair failed and we were unable to recover it.
00:34:10.979 [2024-10-08 17:50:02.693303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.979 [2024-10-08 17:50:02.693335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.979 qpair failed and we were unable to recover it.
00:34:10.979 [2024-10-08 17:50:02.693693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.979 [2024-10-08 17:50:02.693723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.979 qpair failed and we were unable to recover it.
00:34:10.979 [2024-10-08 17:50:02.693960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.979 [2024-10-08 17:50:02.694019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.979 qpair failed and we were unable to recover it.
00:34:10.979 [2024-10-08 17:50:02.694408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.979 [2024-10-08 17:50:02.694436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.979 qpair failed and we were unable to recover it.
00:34:10.979 [2024-10-08 17:50:02.694812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.979 [2024-10-08 17:50:02.694842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.979 qpair failed and we were unable to recover it.
00:34:10.979 [2024-10-08 17:50:02.695207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.979 [2024-10-08 17:50:02.695237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.979 qpair failed and we were unable to recover it.
00:34:10.979 [2024-10-08 17:50:02.695618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.979 [2024-10-08 17:50:02.695647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.979 qpair failed and we were unable to recover it.
00:34:10.979 [2024-10-08 17:50:02.696040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.979 [2024-10-08 17:50:02.696074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.979 qpair failed and we were unable to recover it.
00:34:10.979 [2024-10-08 17:50:02.696403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.979 [2024-10-08 17:50:02.696431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.979 qpair failed and we were unable to recover it.
00:34:10.979 [2024-10-08 17:50:02.696804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-10-08 17:50:02.696833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-10-08 17:50:02.697165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-10-08 17:50:02.697195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-10-08 17:50:02.697454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-10-08 17:50:02.697485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-10-08 17:50:02.697848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-10-08 17:50:02.697877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-10-08 17:50:02.698244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-10-08 17:50:02.698277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-10-08 17:50:02.698659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-10-08 17:50:02.698691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-10-08 17:50:02.699025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-10-08 17:50:02.699054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-10-08 17:50:02.699439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-10-08 17:50:02.699466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-10-08 17:50:02.699641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-10-08 17:50:02.699672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-10-08 17:50:02.700044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-10-08 17:50:02.700075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 
00:34:10.979 [2024-10-08 17:50:02.700339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-10-08 17:50:02.700368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-10-08 17:50:02.700723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-10-08 17:50:02.700751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-10-08 17:50:02.701006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-10-08 17:50:02.701039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-10-08 17:50:02.701410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-10-08 17:50:02.701440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-10-08 17:50:02.701809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-10-08 17:50:02.701837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-10-08 17:50:02.702203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-10-08 17:50:02.702234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.979 qpair failed and we were unable to recover it. 00:34:10.979 [2024-10-08 17:50:02.702593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.979 [2024-10-08 17:50:02.702623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.702989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.703026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.703395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.703424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.703787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.703816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 
00:34:10.980 [2024-10-08 17:50:02.704135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.704165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.704512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.704542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.704910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.704939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.705310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.705340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.705709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.705738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.706082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.706111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.706476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.706505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.706871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.706900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.707266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.707295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.707669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.707697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 
00:34:10.980 [2024-10-08 17:50:02.708052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.708082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.708326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.708359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.708606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.708635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.708911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.708940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.709351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.709382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.709787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.709816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.710046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.710089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.710432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.710461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.710825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.710856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.711190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.711221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 
00:34:10.980 [2024-10-08 17:50:02.711582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.711611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.711982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.712012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.712297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.712326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.712677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.712705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.713058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.713090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.980 qpair failed and we were unable to recover it. 00:34:10.980 [2024-10-08 17:50:02.713450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.980 [2024-10-08 17:50:02.713479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.713851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.713880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.714242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.714271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.714597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.714625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.714996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.715026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 
00:34:10.981 [2024-10-08 17:50:02.715387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.715415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.715791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.715819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.716091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.716121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.716468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.716497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.716855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.716884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.717246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.717276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.717425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.717456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.717832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.717868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.718197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.718228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.718566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.718596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 
00:34:10.981 [2024-10-08 17:50:02.718965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.719003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.719346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.719374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.719738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.719768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.720121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.720151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.720518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.720547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.720895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.720924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.721265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.721296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.721547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.721575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.721921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.721951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.722318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.722349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 
00:34:10.981 [2024-10-08 17:50:02.722713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.722741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.723097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.723127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.723493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.723521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.723885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.723912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.981 qpair failed and we were unable to recover it. 00:34:10.981 [2024-10-08 17:50:02.724285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.981 [2024-10-08 17:50:02.724314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-10-08 17:50:02.724689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.724719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-10-08 17:50:02.724964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.725036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-10-08 17:50:02.725286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.725319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-10-08 17:50:02.725659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.725690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-10-08 17:50:02.726030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.726059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 
00:34:10.982 [2024-10-08 17:50:02.726471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.726499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-10-08 17:50:02.726743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.726775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-10-08 17:50:02.727133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.727163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-10-08 17:50:02.727472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.727508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-10-08 17:50:02.727919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.727948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-10-08 17:50:02.728206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.728234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-10-08 17:50:02.728601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.728630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-10-08 17:50:02.728859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.728892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-10-08 17:50:02.729148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.729179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-10-08 17:50:02.729548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.729578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 
00:34:10.982 [2024-10-08 17:50:02.729953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.729993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-10-08 17:50:02.730350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.730380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-10-08 17:50:02.730740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.730770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-10-08 17:50:02.731131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.731162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-10-08 17:50:02.733498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.733563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-10-08 17:50:02.733998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.734035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-10-08 17:50:02.734413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.734443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-10-08 17:50:02.734815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.982 [2024-10-08 17:50:02.734854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.982 qpair failed and we were unable to recover it. 00:34:10.982 [2024-10-08 17:50:02.735093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.735127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.735531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.735560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 
00:34:10.983 [2024-10-08 17:50:02.735916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.735945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.736386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.736416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.736752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.736780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.737137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.737167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.737529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.737557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.737927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.737956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.738307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.738337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.738704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.738733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.739105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.739135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.739487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.739515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 
00:34:10.983 [2024-10-08 17:50:02.739865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.739893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.740233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.740265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.740615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.740644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.740955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.740995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.741244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.741273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.741507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.741538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.741866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.741895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.742238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.742268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.742480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.742512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.742876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.742905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 
00:34:10.983 [2024-10-08 17:50:02.743161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.743190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.743545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.743573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.743940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.743969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.744367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.744397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.744692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.744722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.745054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.745085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.745442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.745472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.745738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.983 [2024-10-08 17:50:02.745766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.983 qpair failed and we were unable to recover it. 00:34:10.983 [2024-10-08 17:50:02.746122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.984 [2024-10-08 17:50:02.746153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.984 qpair failed and we were unable to recover it. 00:34:10.984 [2024-10-08 17:50:02.746519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.984 [2024-10-08 17:50:02.746548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:10.984 qpair failed and we were unable to recover it. 
00:34:10.984 [2024-10-08 17:50:02.746904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.984 [2024-10-08 17:50:02.746934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:10.984 qpair failed and we were unable to recover it.
00:34:10.984 [... same connect() failure (errno = 111) and unrecoverable qpair error repeated for tqpair=0x7faa48000b90 through 2024-10-08 17:50:02.770935 ...]
00:34:10.986 [2024-10-08 17:50:02.771524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.986 [2024-10-08 17:50:02.771646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420
00:34:10.986 qpair failed and we were unable to recover it.
00:34:10.986 [... same connect() failure (errno = 111) and unrecoverable qpair error repeated for tqpair=0x7faa50000b90 through 2024-10-08 17:50:02.823513 ...]
00:34:10.991 [2024-10-08 17:50:02.823871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.991 [2024-10-08 17:50:02.823899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420
00:34:10.991 qpair failed and we were unable to recover it.
00:34:10.991 [2024-10-08 17:50:02.824266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.824297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.824654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.824683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.825030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.825059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.825417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.825446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.825807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.825835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.826567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.826600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.826971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.827011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.827351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.827379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.827776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.827805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.828243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.828272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 
00:34:10.991 [2024-10-08 17:50:02.828629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.828657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.829059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.829090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.829447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.829475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.829829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.829858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.830203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.830233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.830595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.830623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.830994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.831023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.831393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.831422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.831769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.831799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.832046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.832076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 
00:34:10.991 [2024-10-08 17:50:02.832445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.832473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.832814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.832842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.833225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.833255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.833513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.833541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.833907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.833942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.834323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.834353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.834711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.834738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.835085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.835114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.835467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.835496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.835851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.835879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 
00:34:10.991 [2024-10-08 17:50:02.836242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.836271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.836629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.836661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.836896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.836924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.837319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.837350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.837792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.837820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.838265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.838295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.991 [2024-10-08 17:50:02.838650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.991 [2024-10-08 17:50:02.838678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.991 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.839041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.839070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.839471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.839501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.839924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.839954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 
00:34:10.992 [2024-10-08 17:50:02.840197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.840227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.840447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.840478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.840841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.840871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.841246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.841278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.841631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.841661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.842027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.842058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.842442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.842470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.842812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.842841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.843152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.843182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.843571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.843599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 
00:34:10.992 [2024-10-08 17:50:02.843983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.844013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.844373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.844403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.844766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.844795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.845154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.845184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.845542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.845570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.845932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.845960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.846373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.846404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.846762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.846790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.847157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.847187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.847540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.847569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 
00:34:10.992 [2024-10-08 17:50:02.847933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.847962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.848307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.848337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.848695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.848723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.849083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.849112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.849483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.849519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.849879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.849908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.850275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.850305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.850673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.850702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.851051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.851080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.851519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.851547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 
00:34:10.992 [2024-10-08 17:50:02.851773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.851804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.852187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.852217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.852582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.852611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.852967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.853004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.853359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.853388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.853759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.853787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.854032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.992 [2024-10-08 17:50:02.854065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.992 qpair failed and we were unable to recover it. 00:34:10.992 [2024-10-08 17:50:02.854423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.854452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.854836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.854866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.855219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.855249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 
00:34:10.993 [2024-10-08 17:50:02.855613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.855642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.855906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.855934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.856325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.856355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.856730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.856759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.857125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.857155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.857518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.857547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.857945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.857981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.858335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.858364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.858729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.858758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.859117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.859147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 
00:34:10.993 [2024-10-08 17:50:02.859511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.859540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.859908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.859937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.860298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.860328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.860690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.860718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.861095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.861124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.861476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.861504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.861769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.861797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.862222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.862252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.862589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.862625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.862994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.863024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 
00:34:10.993 [2024-10-08 17:50:02.863391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.863420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.863638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.863669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.864029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.864059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.864405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.864434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.864783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.864818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.865156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.865188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.865551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.865579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.865945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.865988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.866354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.866381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.866742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.866770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 
00:34:10.993 [2024-10-08 17:50:02.867029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.993 [2024-10-08 17:50:02.867058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.993 qpair failed and we were unable to recover it. 00:34:10.993 [2024-10-08 17:50:02.867402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.994 [2024-10-08 17:50:02.867430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.994 qpair failed and we were unable to recover it. 00:34:10.994 [2024-10-08 17:50:02.867793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.994 [2024-10-08 17:50:02.867821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.994 qpair failed and we were unable to recover it. 00:34:10.994 [2024-10-08 17:50:02.868195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.994 [2024-10-08 17:50:02.868224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.994 qpair failed and we were unable to recover it. 00:34:10.994 [2024-10-08 17:50:02.868560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.994 [2024-10-08 17:50:02.868588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.994 qpair failed and we were unable to recover it. 00:34:10.994 [2024-10-08 17:50:02.868964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.994 [2024-10-08 17:50:02.869000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.994 qpair failed and we were unable to recover it. 00:34:10.994 [2024-10-08 17:50:02.869289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.994 [2024-10-08 17:50:02.869319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.994 qpair failed and we were unable to recover it. 00:34:10.994 [2024-10-08 17:50:02.869545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.994 [2024-10-08 17:50:02.869575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.994 qpair failed and we were unable to recover it. 00:34:10.994 [2024-10-08 17:50:02.869988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.994 [2024-10-08 17:50:02.870019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.994 qpair failed and we were unable to recover it. 00:34:10.994 [2024-10-08 17:50:02.870361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.994 [2024-10-08 17:50:02.870390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.994 qpair failed and we were unable to recover it. 
00:34:10.994 [2024-10-08 17:50:02.870739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.994 [2024-10-08 17:50:02.870768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.994 qpair failed and we were unable to recover it. 00:34:10.994 [2024-10-08 17:50:02.871126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.994 [2024-10-08 17:50:02.871156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.994 qpair failed and we were unable to recover it. 00:34:10.994 [2024-10-08 17:50:02.871520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.994 [2024-10-08 17:50:02.871549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.994 qpair failed and we were unable to recover it. 00:34:10.994 [2024-10-08 17:50:02.871907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.994 [2024-10-08 17:50:02.871936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.994 qpair failed and we were unable to recover it. 00:34:10.994 [2024-10-08 17:50:02.872298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.994 [2024-10-08 17:50:02.872328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.994 qpair failed and we were unable to recover it. 00:34:10.994 [2024-10-08 17:50:02.872611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.994 [2024-10-08 17:50:02.872641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.994 qpair failed and we were unable to recover it. 00:34:10.994 [2024-10-08 17:50:02.873013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.994 [2024-10-08 17:50:02.873044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.994 qpair failed and we were unable to recover it. 00:34:10.994 [2024-10-08 17:50:02.873405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.994 [2024-10-08 17:50:02.873434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.994 qpair failed and we were unable to recover it. 00:34:10.994 [2024-10-08 17:50:02.873803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.994 [2024-10-08 17:50:02.873832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.994 qpair failed and we were unable to recover it. 00:34:10.994 [2024-10-08 17:50:02.874279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.994 [2024-10-08 17:50:02.874310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.994 qpair failed and we were unable to recover it. 
00:34:10.994 [2024-10-08 17:50:02.874676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.994 [2024-10-08 17:50:02.874704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.994 qpair failed and we were unable to recover it.
00:34:10.994 [... the same three-line sequence — posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously, one retry roughly every 0.3-0.4 ms, from [2024-10-08 17:50:02.875086] through [2024-10-08 17:50:02.954215] (console timestamps 00:34:10.994 through 00:34:10.999) ...]
00:34:10.999 [2024-10-08 17:50:02.954481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-10-08 17:50:02.954510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-10-08 17:50:02.954887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-10-08 17:50:02.954915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-10-08 17:50:02.955162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-10-08 17:50:02.955191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:10.999 [2024-10-08 17:50:02.955346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.999 [2024-10-08 17:50:02.955374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:10.999 qpair failed and we were unable to recover it. 00:34:11.275 [2024-10-08 17:50:02.955724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.275 [2024-10-08 17:50:02.955755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.275 qpair failed and we were unable to recover it. 00:34:11.275 [2024-10-08 17:50:02.956123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.275 [2024-10-08 17:50:02.956153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.275 qpair failed and we were unable to recover it. 00:34:11.275 [2024-10-08 17:50:02.956511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.275 [2024-10-08 17:50:02.956540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.275 qpair failed and we were unable to recover it. 00:34:11.275 [2024-10-08 17:50:02.956763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.275 [2024-10-08 17:50:02.956791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.275 qpair failed and we were unable to recover it. 00:34:11.275 [2024-10-08 17:50:02.957131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.275 [2024-10-08 17:50:02.957167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.275 qpair failed and we were unable to recover it. 00:34:11.275 [2024-10-08 17:50:02.957538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.275 [2024-10-08 17:50:02.957567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.275 qpair failed and we were unable to recover it. 
00:34:11.275 [2024-10-08 17:50:02.957909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.275 [2024-10-08 17:50:02.957939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.275 qpair failed and we were unable to recover it. 00:34:11.275 [2024-10-08 17:50:02.958274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.275 [2024-10-08 17:50:02.958304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.275 qpair failed and we were unable to recover it. 00:34:11.275 [2024-10-08 17:50:02.958663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.275 [2024-10-08 17:50:02.958691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.275 qpair failed and we were unable to recover it. 00:34:11.275 [2024-10-08 17:50:02.959062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.275 [2024-10-08 17:50:02.959092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.275 qpair failed and we were unable to recover it. 00:34:11.275 [2024-10-08 17:50:02.959520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.275 [2024-10-08 17:50:02.959548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.275 qpair failed and we were unable to recover it. 00:34:11.275 [2024-10-08 17:50:02.959905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.275 [2024-10-08 17:50:02.959933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.275 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.960300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.960329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.960687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.960716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.961091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.961120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.961481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.961509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 
00:34:11.276 [2024-10-08 17:50:02.961853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.961883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.962219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.962249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.962586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.962615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.962971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.963010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.963374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.963402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.963660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.963688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.964047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.964076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.964439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.964469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.964833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.964861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.965241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.965270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 
00:34:11.276 [2024-10-08 17:50:02.965633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.965661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.966039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.966075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.966409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.966439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.966814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.966842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.967112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.967142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.967500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.967528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.967887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.967915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.968144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.968176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.968546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.968575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.968946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.968982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 
00:34:11.276 [2024-10-08 17:50:02.969335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.969363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.969715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.969743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.970019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.970050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.970306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.970335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.970668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.970697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.971039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.971070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.971427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.971456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.971807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.971837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.972241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.972270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.972628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.972656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 
00:34:11.276 [2024-10-08 17:50:02.973014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.973057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.973447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.973476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.973843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.973871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.974278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.974307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.974571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.974599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.276 [2024-10-08 17:50:02.974951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.276 [2024-10-08 17:50:02.974990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.276 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.975340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.975369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.975807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.975835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.976237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.976269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.976429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.976457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 
00:34:11.277 [2024-10-08 17:50:02.976861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.976890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.977243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.977273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.977599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.977627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.978006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.978035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.978289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.978317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.978686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.978715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.979078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.979107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.979490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.979519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.979787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.979814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.980202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.980231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 
00:34:11.277 [2024-10-08 17:50:02.980610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.980639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.980994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.981029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.981396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.981425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.981768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.981797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.982156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.982186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.982545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.982574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.982981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.983012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.983371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.983400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.983775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.983803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.984141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.984171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 
00:34:11.277 [2024-10-08 17:50:02.984535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.984563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.984771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.984801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.985192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.985221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.985582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.985611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.986019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.986049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.986246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.986274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.986640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.986668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.987029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.987058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.987392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.987421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.987785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.987814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 
00:34:11.277 [2024-10-08 17:50:02.988036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.988065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.988422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.988459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.988805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.988833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.989072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.989101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.989471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.989500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.277 [2024-10-08 17:50:02.989866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.277 [2024-10-08 17:50:02.989894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.277 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.990245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.990274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.990616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.990645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.990908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.990936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.991194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.991225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 
00:34:11.278 [2024-10-08 17:50:02.991508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.991537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.991883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.991912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.992252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.992282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.992646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.992675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.993017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.993046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.993423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.993451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.993811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.993840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.994167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.994196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.994550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.994582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.994937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.994966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 
00:34:11.278 [2024-10-08 17:50:02.995323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.995353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.995715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.995749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.996110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.996140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.996509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.996537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.996895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.996923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.997276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.997307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.997670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.997701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.998073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.998103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.998370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.998398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.998786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.998814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 
00:34:11.278 [2024-10-08 17:50:02.999135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.999173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.999531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.999560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:02.999914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:02.999944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:03.000196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:03.000228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:03.000510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:03.000539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:03.000896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:03.000924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:03.001313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:03.001343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:03.001717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:03.001746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:03.002117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:03.002148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 00:34:11.278 [2024-10-08 17:50:03.002490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:03.002519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it. 
00:34:11.278 [2024-10-08 17:50:03.002892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.278 [2024-10-08 17:50:03.002920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:11.278 qpair failed and we were unable to recover it.
00:34:11.281 [... the same three-line error (connect() failed, errno = 111 -> sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats roughly 117 times with advancing timestamps, 17:50:03.002892 through 17:50:03.045331 ...]
00:34:11.281 [2024-10-08 17:50:03.045857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.281 [2024-10-08 17:50:03.046000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.281 qpair failed and we were unable to recover it.
00:34:11.284 [... the same error sequence then repeats for the new tqpair=0x7faa44000b90 roughly 93 times, timestamps 17:50:03.045857 through 17:50:03.080075, all against addr=10.0.0.2, port=4420 ...]
00:34:11.284 [2024-10-08 17:50:03.080453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.080482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-10-08 17:50:03.080832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.080861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-10-08 17:50:03.081233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.081263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-10-08 17:50:03.081687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.081716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-10-08 17:50:03.082056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.082087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-10-08 17:50:03.082512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.082540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-10-08 17:50:03.082871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.082900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-10-08 17:50:03.083281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.083311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-10-08 17:50:03.083673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.083701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-10-08 17:50:03.083942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.083970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 
00:34:11.284 [2024-10-08 17:50:03.084333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.084363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-10-08 17:50:03.084716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.084751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-10-08 17:50:03.085094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.085124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-10-08 17:50:03.085395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.085423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-10-08 17:50:03.085655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.085684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-10-08 17:50:03.086038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.086069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-10-08 17:50:03.086424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.086452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-10-08 17:50:03.086819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.086847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-10-08 17:50:03.087221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.087252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-10-08 17:50:03.087663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.087693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 
00:34:11.284 [2024-10-08 17:50:03.088084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.088114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-10-08 17:50:03.088353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.088381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.284 [2024-10-08 17:50:03.088733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.284 [2024-10-08 17:50:03.088762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.284 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.089109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.089140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.089490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.089518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.089778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.089810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.090206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.090237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.090596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.090625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.090960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.090998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.091381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.091409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 
00:34:11.285 [2024-10-08 17:50:03.091776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.091805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.092184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.092213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.092570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.092599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.092968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.093007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.093424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.093453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.093811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.093840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.094204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.094235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.094580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.094609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.094998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.095030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.095386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.095415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 
00:34:11.285 [2024-10-08 17:50:03.095778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.095808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.096163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.096194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.096554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.096584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.096946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.096984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.097313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.097342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.097705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.097736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.098097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.098128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.098490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.098518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.098875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.098905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.099258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.099288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 
00:34:11.285 [2024-10-08 17:50:03.099650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.099679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.100044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.100081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.100439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.100468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.100771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.100799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.101143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.101173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.101544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.101572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.101919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.101947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.102347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.102378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.102735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.102764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.103020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.103049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 
00:34:11.285 [2024-10-08 17:50:03.103241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.103274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.103639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.103668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.285 [2024-10-08 17:50:03.104030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.285 [2024-10-08 17:50:03.104061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.285 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.104431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.104460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.104704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.104732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.105068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.105098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.105466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.105495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.105748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.105776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.106131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.106160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.106522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.106549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 
00:34:11.286 [2024-10-08 17:50:03.106911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.106940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.107303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.107333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.107597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.107626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.107996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.108027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.108257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.108288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.108651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.108681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.109039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.109069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.109431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.109461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.109830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.109860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.110223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.110253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 
00:34:11.286 [2024-10-08 17:50:03.110598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.110627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.110873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.110901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.111272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.111303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.111673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.111703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.112078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.112109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.112470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.112500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.112864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.112892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.113264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.113294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.113579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.113607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.113849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.113877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 
00:34:11.286 [2024-10-08 17:50:03.114238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.114267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.114605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.114640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.114993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.115023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.115423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.115451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.115811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.115841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.116218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.116247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.116604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.116633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.117001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.117031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.117410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.117439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.117804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.117834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 
00:34:11.286 [2024-10-08 17:50:03.118209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.118239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.118601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.118630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.286 [2024-10-08 17:50:03.118917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.286 [2024-10-08 17:50:03.118945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.286 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.119299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.119329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.119688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.119717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.120078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.120108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.120467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.120495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.120844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.120872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.121233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.121263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.121613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.121642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 
00:34:11.287 [2024-10-08 17:50:03.121997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.122028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.122404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.122433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.122800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.122829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.123199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.123228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.123574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.123603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.123962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.124000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.124239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.124270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.124623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.124653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.125007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.125039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.125449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.125477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 
00:34:11.287 [2024-10-08 17:50:03.125838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.125867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.126215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.126245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.126490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.126518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.126872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.126901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.127239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.127269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.127514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.127546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.127781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.127811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.128066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.128099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.128444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.128472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 00:34:11.287 [2024-10-08 17:50:03.128805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.287 [2024-10-08 17:50:03.128834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.287 qpair failed and we were unable to recover it. 
00:34:11.287 [2024-10-08 17:50:03.129205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.287 [2024-10-08 17:50:03.129236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.287 qpair failed and we were unable to recover it.
00:34:11.287 [... the same three-line error repeats for each reconnect attempt (roughly 210 occurrences between 17:50:03.129205 and 17:50:03.207665), identical except for the timestamps ...]
00:34:11.293 [2024-10-08 17:50:03.207636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.293 [2024-10-08 17:50:03.207665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.293 qpair failed and we were unable to recover it.
00:34:11.293 [2024-10-08 17:50:03.208005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.208034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.208271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.208299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.208561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.208589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.208915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.208943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.209310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.209340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.209704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.209733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.210173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.210203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.210555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.210583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.210932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.210960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.211412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.211441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 
00:34:11.293 [2024-10-08 17:50:03.211792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.211820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.212239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.212270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.212617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.212645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.212889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.212922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.213197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.213229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.213609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.213637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.213878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.213910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.214286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.214317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.214677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.214713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.215054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.215084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 
00:34:11.293 [2024-10-08 17:50:03.215454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.215482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.215852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.215880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.216223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.216252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.216605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.216634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.216997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.217026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.217389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.217417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.217661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.217693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.218028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.218058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.218404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.218433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.218792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.218820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 
00:34:11.293 [2024-10-08 17:50:03.219181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.219210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.219560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.219588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.219959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.220004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.220358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.220386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.293 qpair failed and we were unable to recover it. 00:34:11.293 [2024-10-08 17:50:03.220742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.293 [2024-10-08 17:50:03.220771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.220995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.221024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.221383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.221412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.221783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.221812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.222184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.222214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.222572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.222600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 
00:34:11.294 [2024-10-08 17:50:03.222963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.223000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.223234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.223265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.223632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.223661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.224024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.224056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.224345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.224374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.224605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.224634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.225012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.225042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.225274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.225302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.225662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.225691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.226056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.226086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 
00:34:11.294 [2024-10-08 17:50:03.226462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.226490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.226857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.226885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.227254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.227284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.227663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.227692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.228055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.228084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.228450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.228478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.228783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.228811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.229163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.229193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.229574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.229609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.229827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.229858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 
00:34:11.294 [2024-10-08 17:50:03.230231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.230262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.230625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.230654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.231014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.231043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.231408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.231437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.231794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.231823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.232169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.232199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.232561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.232593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.232940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.232969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.233339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.233369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.233740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.233769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 
00:34:11.294 [2024-10-08 17:50:03.234138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.234168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.234433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.234461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.234852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.234882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.235296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.235333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.294 [2024-10-08 17:50:03.235695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.294 [2024-10-08 17:50:03.235723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.294 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.236091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.236120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.236532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.236560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.236917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.236945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.237320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.237349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.237694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.237722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 
00:34:11.295 [2024-10-08 17:50:03.238087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.238118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.238484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.238512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.238882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.238910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.239290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.239319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.239674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.239703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.240054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.240086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.240432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.240460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.240835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.240864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.241207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.241238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.241618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.241646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 
00:34:11.295 [2024-10-08 17:50:03.242085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.242115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.242446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.242474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.242710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.242739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.243022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.243052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.243419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.243447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.243804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.243833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.244205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.244236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.244635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.244665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.245021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.245057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.245314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.245343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 
00:34:11.295 [2024-10-08 17:50:03.245699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.245730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.246084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.246115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.246482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.246511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.246873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.246903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.247266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.247297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.247661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.247690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.248061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.248092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.248456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.248486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.248836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.248865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.249023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.249053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 
00:34:11.295 [2024-10-08 17:50:03.249387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.249417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.249762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.249792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.250152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.250182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.250552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.250581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.295 qpair failed and we were unable to recover it. 00:34:11.295 [2024-10-08 17:50:03.250954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.295 [2024-10-08 17:50:03.250992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.296 qpair failed and we were unable to recover it. 00:34:11.296 [2024-10-08 17:50:03.251328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.296 [2024-10-08 17:50:03.251357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.296 qpair failed and we were unable to recover it. 00:34:11.296 [2024-10-08 17:50:03.251714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.296 [2024-10-08 17:50:03.251743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.296 qpair failed and we were unable to recover it. 00:34:11.296 [2024-10-08 17:50:03.251991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.296 [2024-10-08 17:50:03.252022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.296 qpair failed and we were unable to recover it. 00:34:11.296 [2024-10-08 17:50:03.252397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.296 [2024-10-08 17:50:03.252427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.296 qpair failed and we were unable to recover it. 00:34:11.296 [2024-10-08 17:50:03.252658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.296 [2024-10-08 17:50:03.252691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.296 qpair failed and we were unable to recover it. 
00:34:11.296 [2024-10-08 17:50:03.253061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.296 [2024-10-08 17:50:03.253093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.296 qpair failed and we were unable to recover it. 00:34:11.570 [2024-10-08 17:50:03.253452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.570 [2024-10-08 17:50:03.253485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.570 qpair failed and we were unable to recover it. 00:34:11.570 [2024-10-08 17:50:03.253851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.570 [2024-10-08 17:50:03.253881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.570 qpair failed and we were unable to recover it. 00:34:11.570 [2024-10-08 17:50:03.254225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.570 [2024-10-08 17:50:03.254257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.570 qpair failed and we were unable to recover it. 00:34:11.570 [2024-10-08 17:50:03.254632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.570 [2024-10-08 17:50:03.254663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.570 qpair failed and we were unable to recover it. 00:34:11.570 [2024-10-08 17:50:03.254963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.570 [2024-10-08 17:50:03.255006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.570 qpair failed and we were unable to recover it. 00:34:11.570 [2024-10-08 17:50:03.255245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.570 [2024-10-08 17:50:03.255275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.570 qpair failed and we were unable to recover it. 00:34:11.570 [2024-10-08 17:50:03.255652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.570 [2024-10-08 17:50:03.255682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.570 qpair failed and we were unable to recover it. 00:34:11.570 [2024-10-08 17:50:03.255913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.570 [2024-10-08 17:50:03.255944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.570 qpair failed and we were unable to recover it. 00:34:11.570 [2024-10-08 17:50:03.256208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.570 [2024-10-08 17:50:03.256239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.570 qpair failed and we were unable to recover it. 
00:34:11.570 [2024-10-08 17:50:03.256668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.570 [2024-10-08 17:50:03.256698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.570 qpair failed and we were unable to recover it.
00:34:11.579 [... the same three-line failure (posix.c:1055 connect() errno = 111 -> nvme_tcp.c:2399 sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats on every subsequent reconnect attempt, with only the timestamps advancing, through 2024-10-08 17:50:03.333401 ...]
00:34:11.579 [2024-10-08 17:50:03.333773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-10-08 17:50:03.333802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-10-08 17:50:03.334138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-10-08 17:50:03.334168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-10-08 17:50:03.334526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-10-08 17:50:03.334554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-10-08 17:50:03.334953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-10-08 17:50:03.334990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-10-08 17:50:03.335338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-10-08 17:50:03.335368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-10-08 17:50:03.335696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-10-08 17:50:03.335724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-10-08 17:50:03.336110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.579 [2024-10-08 17:50:03.336140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.579 qpair failed and we were unable to recover it. 00:34:11.579 [2024-10-08 17:50:03.336497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-10-08 17:50:03.336525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-10-08 17:50:03.336908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-10-08 17:50:03.336936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-10-08 17:50:03.337297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-10-08 17:50:03.337327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 
00:34:11.580 [2024-10-08 17:50:03.337692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-10-08 17:50:03.337721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-10-08 17:50:03.338079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-10-08 17:50:03.338109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-10-08 17:50:03.338363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-10-08 17:50:03.338395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-10-08 17:50:03.338647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-10-08 17:50:03.338676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-10-08 17:50:03.339067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-10-08 17:50:03.339096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-10-08 17:50:03.339466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-10-08 17:50:03.339495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-10-08 17:50:03.339854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-10-08 17:50:03.339883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-10-08 17:50:03.340228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-10-08 17:50:03.340257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-10-08 17:50:03.340628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-10-08 17:50:03.340657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-10-08 17:50:03.341019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-10-08 17:50:03.341048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 
00:34:11.580 [2024-10-08 17:50:03.341298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-10-08 17:50:03.341325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-10-08 17:50:03.341693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-10-08 17:50:03.341721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-10-08 17:50:03.342101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-10-08 17:50:03.342132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-10-08 17:50:03.342560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.580 [2024-10-08 17:50:03.342588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.580 qpair failed and we were unable to recover it. 00:34:11.580 [2024-10-08 17:50:03.342918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-10-08 17:50:03.342947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-10-08 17:50:03.343375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-10-08 17:50:03.343417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-10-08 17:50:03.343801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-10-08 17:50:03.343829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-10-08 17:50:03.344169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-10-08 17:50:03.344199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-10-08 17:50:03.344612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-10-08 17:50:03.344640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-10-08 17:50:03.344898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-10-08 17:50:03.344926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 
00:34:11.581 [2024-10-08 17:50:03.345190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-10-08 17:50:03.345221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-10-08 17:50:03.345464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-10-08 17:50:03.345496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-10-08 17:50:03.345860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-10-08 17:50:03.345889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-10-08 17:50:03.346247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-10-08 17:50:03.346278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-10-08 17:50:03.346624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-10-08 17:50:03.346652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-10-08 17:50:03.347075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-10-08 17:50:03.347104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-10-08 17:50:03.347464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-10-08 17:50:03.347494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-10-08 17:50:03.347856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-10-08 17:50:03.347886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-10-08 17:50:03.348259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-10-08 17:50:03.348289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-10-08 17:50:03.348659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-10-08 17:50:03.348688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 
00:34:11.581 [2024-10-08 17:50:03.349049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-10-08 17:50:03.349081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.581 qpair failed and we were unable to recover it. 00:34:11.581 [2024-10-08 17:50:03.349371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.581 [2024-10-08 17:50:03.349399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-10-08 17:50:03.349789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-10-08 17:50:03.349817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-10-08 17:50:03.350098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-10-08 17:50:03.350127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-10-08 17:50:03.350504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-10-08 17:50:03.350533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-10-08 17:50:03.350893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-10-08 17:50:03.350921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-10-08 17:50:03.351286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-10-08 17:50:03.351316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-10-08 17:50:03.351696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-10-08 17:50:03.351725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-10-08 17:50:03.352101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-10-08 17:50:03.352132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-10-08 17:50:03.352499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-10-08 17:50:03.352528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 
00:34:11.582 [2024-10-08 17:50:03.352735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-10-08 17:50:03.352766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-10-08 17:50:03.353119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-10-08 17:50:03.353149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-10-08 17:50:03.353507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-10-08 17:50:03.353536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-10-08 17:50:03.353904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-10-08 17:50:03.353931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-10-08 17:50:03.354340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-10-08 17:50:03.354370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-10-08 17:50:03.354710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-10-08 17:50:03.354739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-10-08 17:50:03.355091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-10-08 17:50:03.355122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.582 [2024-10-08 17:50:03.355487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.582 [2024-10-08 17:50:03.355516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.582 qpair failed and we were unable to recover it. 00:34:11.583 [2024-10-08 17:50:03.355894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-10-08 17:50:03.355923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-10-08 17:50:03.356295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-10-08 17:50:03.356325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 
00:34:11.583 [2024-10-08 17:50:03.356689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-10-08 17:50:03.356719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-10-08 17:50:03.357081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-10-08 17:50:03.357110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-10-08 17:50:03.357457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-10-08 17:50:03.357486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-10-08 17:50:03.357826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-10-08 17:50:03.357855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-10-08 17:50:03.358202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-10-08 17:50:03.358232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-10-08 17:50:03.358594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-10-08 17:50:03.358628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-10-08 17:50:03.358968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-10-08 17:50:03.359008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-10-08 17:50:03.359239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-10-08 17:50:03.359270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-10-08 17:50:03.359652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-10-08 17:50:03.359680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-10-08 17:50:03.360041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-10-08 17:50:03.360071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 
00:34:11.583 [2024-10-08 17:50:03.360427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-10-08 17:50:03.360455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-10-08 17:50:03.360823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-10-08 17:50:03.360851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-10-08 17:50:03.361077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.583 [2024-10-08 17:50:03.361109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.583 qpair failed and we were unable to recover it. 00:34:11.583 [2024-10-08 17:50:03.361489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-10-08 17:50:03.361518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 00:34:11.584 [2024-10-08 17:50:03.361891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-10-08 17:50:03.361919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 00:34:11.584 [2024-10-08 17:50:03.362361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-10-08 17:50:03.362391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 00:34:11.584 [2024-10-08 17:50:03.362746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-10-08 17:50:03.362774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 00:34:11.584 [2024-10-08 17:50:03.363127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-10-08 17:50:03.363158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 00:34:11.584 [2024-10-08 17:50:03.363520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-10-08 17:50:03.363549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 00:34:11.584 [2024-10-08 17:50:03.363909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-10-08 17:50:03.363937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 
00:34:11.584 [2024-10-08 17:50:03.364296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-10-08 17:50:03.364326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 00:34:11.584 [2024-10-08 17:50:03.364691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-10-08 17:50:03.364719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 00:34:11.584 [2024-10-08 17:50:03.365061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-10-08 17:50:03.365091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 00:34:11.584 [2024-10-08 17:50:03.365452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-10-08 17:50:03.365482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 00:34:11.584 [2024-10-08 17:50:03.365843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-10-08 17:50:03.365871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 00:34:11.584 [2024-10-08 17:50:03.366161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-10-08 17:50:03.366191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 00:34:11.584 [2024-10-08 17:50:03.366637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-10-08 17:50:03.366665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 00:34:11.584 [2024-10-08 17:50:03.367023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-10-08 17:50:03.367053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 00:34:11.584 [2024-10-08 17:50:03.367424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.584 [2024-10-08 17:50:03.367452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.584 qpair failed and we were unable to recover it. 00:34:11.585 [2024-10-08 17:50:03.367854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.585 [2024-10-08 17:50:03.367881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.585 qpair failed and we were unable to recover it. 
00:34:11.585 [2024-10-08 17:50:03.368231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.585 [2024-10-08 17:50:03.368262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.585 qpair failed and we were unable to recover it. 00:34:11.585 [2024-10-08 17:50:03.368629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.585 [2024-10-08 17:50:03.368657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.585 qpair failed and we were unable to recover it. 00:34:11.585 [2024-10-08 17:50:03.369024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.585 [2024-10-08 17:50:03.369054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.585 qpair failed and we were unable to recover it. 00:34:11.585 [2024-10-08 17:50:03.369272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.585 [2024-10-08 17:50:03.369304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.585 qpair failed and we were unable to recover it. 00:34:11.585 [2024-10-08 17:50:03.369669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.585 [2024-10-08 17:50:03.369697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.585 qpair failed and we were unable to recover it. 00:34:11.585 [2024-10-08 17:50:03.370090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.585 [2024-10-08 17:50:03.370120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.585 qpair failed and we were unable to recover it. 00:34:11.585 [2024-10-08 17:50:03.370477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.585 [2024-10-08 17:50:03.370504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.585 qpair failed and we were unable to recover it. 00:34:11.585 [2024-10-08 17:50:03.370867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.585 [2024-10-08 17:50:03.370895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.585 qpair failed and we were unable to recover it. 00:34:11.585 [2024-10-08 17:50:03.371155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.585 [2024-10-08 17:50:03.371185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.585 qpair failed and we were unable to recover it. 00:34:11.585 [2024-10-08 17:50:03.371431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.585 [2024-10-08 17:50:03.371464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.585 qpair failed and we were unable to recover it. 
00:34:11.585 [2024-10-08 17:50:03.371812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.585 [2024-10-08 17:50:03.371841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.585 qpair failed and we were unable to recover it. 00:34:11.585 [2024-10-08 17:50:03.372210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.585 [2024-10-08 17:50:03.372240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.585 qpair failed and we were unable to recover it. 00:34:11.585 [2024-10-08 17:50:03.372608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.585 [2024-10-08 17:50:03.372636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.585 qpair failed and we were unable to recover it. 00:34:11.585 [2024-10-08 17:50:03.372999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.585 [2024-10-08 17:50:03.373028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.586 qpair failed and we were unable to recover it. 00:34:11.586 [2024-10-08 17:50:03.373391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.586 [2024-10-08 17:50:03.373420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.586 qpair failed and we were unable to recover it. 00:34:11.586 [2024-10-08 17:50:03.373780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.586 [2024-10-08 17:50:03.373815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.586 qpair failed and we were unable to recover it. 00:34:11.586 [2024-10-08 17:50:03.374185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.586 [2024-10-08 17:50:03.374216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.586 qpair failed and we were unable to recover it. 00:34:11.586 [2024-10-08 17:50:03.374585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.586 [2024-10-08 17:50:03.374614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.586 qpair failed and we were unable to recover it. 00:34:11.586 [2024-10-08 17:50:03.374777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.586 [2024-10-08 17:50:03.374805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.586 qpair failed and we were unable to recover it. 00:34:11.586 [2024-10-08 17:50:03.375080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.586 [2024-10-08 17:50:03.375109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.586 qpair failed and we were unable to recover it. 
00:34:11.586 [2024-10-08 17:50:03.375529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.586 [2024-10-08 17:50:03.375557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.586 qpair failed and we were unable to recover it. 00:34:11.586 [2024-10-08 17:50:03.375920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.586 [2024-10-08 17:50:03.375948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.586 qpair failed and we were unable to recover it. 00:34:11.586 [2024-10-08 17:50:03.376216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.586 [2024-10-08 17:50:03.376245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.586 qpair failed and we were unable to recover it. 00:34:11.586 [2024-10-08 17:50:03.376610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.586 [2024-10-08 17:50:03.376638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.586 qpair failed and we were unable to recover it. 00:34:11.586 [2024-10-08 17:50:03.376992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.586 [2024-10-08 17:50:03.377022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.586 qpair failed and we were unable to recover it. 00:34:11.586 [2024-10-08 17:50:03.377373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.586 [2024-10-08 17:50:03.377401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.586 qpair failed and we were unable to recover it. 00:34:11.586 [2024-10-08 17:50:03.377766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.586 [2024-10-08 17:50:03.377794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.586 qpair failed and we were unable to recover it. 00:34:11.586 [2024-10-08 17:50:03.378097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.586 [2024-10-08 17:50:03.378126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.586 qpair failed and we were unable to recover it. 00:34:11.586 [2024-10-08 17:50:03.378475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.586 [2024-10-08 17:50:03.378504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-10-08 17:50:03.378810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-10-08 17:50:03.378839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 
00:34:11.587 [2024-10-08 17:50:03.379205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-10-08 17:50:03.379235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-10-08 17:50:03.379524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-10-08 17:50:03.379552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-10-08 17:50:03.379919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-10-08 17:50:03.379948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-10-08 17:50:03.380245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-10-08 17:50:03.380273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-10-08 17:50:03.380634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-10-08 17:50:03.380662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-10-08 17:50:03.380911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-10-08 17:50:03.380943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-10-08 17:50:03.381320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-10-08 17:50:03.381350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-10-08 17:50:03.381575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-10-08 17:50:03.381606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-10-08 17:50:03.381985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.587 [2024-10-08 17:50:03.382016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.587 qpair failed and we were unable to recover it. 00:34:11.587 [2024-10-08 17:50:03.382389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-10-08 17:50:03.382417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 
00:34:11.588 [2024-10-08 17:50:03.382778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-10-08 17:50:03.382807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-10-08 17:50:03.383143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-10-08 17:50:03.383173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-10-08 17:50:03.383549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-10-08 17:50:03.383578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-10-08 17:50:03.384013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-10-08 17:50:03.384044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-10-08 17:50:03.384434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-10-08 17:50:03.384462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-10-08 17:50:03.384821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-10-08 17:50:03.384849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-10-08 17:50:03.385237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-10-08 17:50:03.385266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-10-08 17:50:03.385638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-10-08 17:50:03.385666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-10-08 17:50:03.386030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-10-08 17:50:03.386059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 00:34:11.588 [2024-10-08 17:50:03.386415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.588 [2024-10-08 17:50:03.386443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.588 qpair failed and we were unable to recover it. 
00:34:11.588 [2024-10-08 17:50:03.386801 through 17:50:03.462255] the same three-line failure repeats for every remaining reconnect attempt: posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." (console timestamps run through 00:34:11.603)
00:34:11.603 [2024-10-08 17:50:03.462578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-10-08 17:50:03.462607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-10-08 17:50:03.462982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-10-08 17:50:03.463012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-10-08 17:50:03.463373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-10-08 17:50:03.463403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-10-08 17:50:03.463765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-10-08 17:50:03.463795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-10-08 17:50:03.464159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-10-08 17:50:03.464189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-10-08 17:50:03.464542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-10-08 17:50:03.464576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-10-08 17:50:03.464998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-10-08 17:50:03.465029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-10-08 17:50:03.465395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-10-08 17:50:03.465423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-10-08 17:50:03.465788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-10-08 17:50:03.465817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-10-08 17:50:03.466163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-10-08 17:50:03.466193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 
00:34:11.603 [2024-10-08 17:50:03.466554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-10-08 17:50:03.466582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-10-08 17:50:03.466921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-10-08 17:50:03.466949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.603 qpair failed and we were unable to recover it. 00:34:11.603 [2024-10-08 17:50:03.467310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.603 [2024-10-08 17:50:03.467340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-10-08 17:50:03.467671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-10-08 17:50:03.467700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-10-08 17:50:03.468008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-10-08 17:50:03.468038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-10-08 17:50:03.468406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-10-08 17:50:03.468435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-10-08 17:50:03.468783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-10-08 17:50:03.468811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-10-08 17:50:03.469244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-10-08 17:50:03.469274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-10-08 17:50:03.469603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-10-08 17:50:03.469632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-10-08 17:50:03.469997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-10-08 17:50:03.470027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 
00:34:11.604 [2024-10-08 17:50:03.470275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-10-08 17:50:03.470307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-10-08 17:50:03.470672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-10-08 17:50:03.470701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-10-08 17:50:03.471112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-10-08 17:50:03.471142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-10-08 17:50:03.471498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-10-08 17:50:03.471528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-10-08 17:50:03.471887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-10-08 17:50:03.471916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-10-08 17:50:03.472253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.604 [2024-10-08 17:50:03.472282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.604 qpair failed and we were unable to recover it. 00:34:11.604 [2024-10-08 17:50:03.472645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-10-08 17:50:03.472675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-10-08 17:50:03.472934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-10-08 17:50:03.472962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-10-08 17:50:03.473385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-10-08 17:50:03.473417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-10-08 17:50:03.473775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-10-08 17:50:03.473804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 
00:34:11.605 [2024-10-08 17:50:03.474023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-10-08 17:50:03.474055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-10-08 17:50:03.474413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-10-08 17:50:03.474441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-10-08 17:50:03.474783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-10-08 17:50:03.474812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-10-08 17:50:03.475179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-10-08 17:50:03.475209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-10-08 17:50:03.475435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-10-08 17:50:03.475464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-10-08 17:50:03.475851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-10-08 17:50:03.475880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-10-08 17:50:03.476225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-10-08 17:50:03.476256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-10-08 17:50:03.476621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-10-08 17:50:03.476649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-10-08 17:50:03.477020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-10-08 17:50:03.477049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 00:34:11.605 [2024-10-08 17:50:03.477407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.605 [2024-10-08 17:50:03.477435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.605 qpair failed and we were unable to recover it. 
00:34:11.606 [2024-10-08 17:50:03.477799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.606 [2024-10-08 17:50:03.477827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.606 qpair failed and we were unable to recover it. 00:34:11.606 [2024-10-08 17:50:03.478198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.606 [2024-10-08 17:50:03.478229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.606 qpair failed and we were unable to recover it. 00:34:11.606 [2024-10-08 17:50:03.478612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.606 [2024-10-08 17:50:03.478640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.606 qpair failed and we were unable to recover it. 00:34:11.606 [2024-10-08 17:50:03.478998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.606 [2024-10-08 17:50:03.479027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.606 qpair failed and we were unable to recover it. 00:34:11.606 [2024-10-08 17:50:03.479385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.606 [2024-10-08 17:50:03.479415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.606 qpair failed and we were unable to recover it. 00:34:11.606 [2024-10-08 17:50:03.479768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.606 [2024-10-08 17:50:03.479802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.606 qpair failed and we were unable to recover it. 00:34:11.606 [2024-10-08 17:50:03.480140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.606 [2024-10-08 17:50:03.480170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.606 qpair failed and we were unable to recover it. 00:34:11.606 [2024-10-08 17:50:03.480536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.606 [2024-10-08 17:50:03.480565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.606 qpair failed and we were unable to recover it. 00:34:11.606 [2024-10-08 17:50:03.480922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.606 [2024-10-08 17:50:03.480952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.606 qpair failed and we were unable to recover it. 00:34:11.606 [2024-10-08 17:50:03.481328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.606 [2024-10-08 17:50:03.481358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.606 qpair failed and we were unable to recover it. 
00:34:11.606 [2024-10-08 17:50:03.481711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.606 [2024-10-08 17:50:03.481740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.606 qpair failed and we were unable to recover it. 00:34:11.606 [2024-10-08 17:50:03.482099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.606 [2024-10-08 17:50:03.482129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.606 qpair failed and we were unable to recover it. 00:34:11.606 [2024-10-08 17:50:03.482484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.606 [2024-10-08 17:50:03.482513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.606 qpair failed and we were unable to recover it. 00:34:11.607 [2024-10-08 17:50:03.482949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.607 [2024-10-08 17:50:03.482985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.607 qpair failed and we were unable to recover it. 00:34:11.607 [2024-10-08 17:50:03.483203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.607 [2024-10-08 17:50:03.483235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.607 qpair failed and we were unable to recover it. 00:34:11.607 [2024-10-08 17:50:03.483614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.607 [2024-10-08 17:50:03.483644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.607 qpair failed and we were unable to recover it. 00:34:11.607 [2024-10-08 17:50:03.484008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.607 [2024-10-08 17:50:03.484038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.607 qpair failed and we were unable to recover it. 00:34:11.607 [2024-10-08 17:50:03.484401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.607 [2024-10-08 17:50:03.484429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.607 qpair failed and we were unable to recover it. 00:34:11.607 [2024-10-08 17:50:03.484784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.607 [2024-10-08 17:50:03.484813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.607 qpair failed and we were unable to recover it. 00:34:11.607 [2024-10-08 17:50:03.485161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.607 [2024-10-08 17:50:03.485192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.607 qpair failed and we were unable to recover it. 
00:34:11.607 [2024-10-08 17:50:03.485556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.607 [2024-10-08 17:50:03.485584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.607 qpair failed and we were unable to recover it. 00:34:11.607 [2024-10-08 17:50:03.485951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.607 [2024-10-08 17:50:03.485988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.607 qpair failed and we were unable to recover it. 00:34:11.607 [2024-10-08 17:50:03.486371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.607 [2024-10-08 17:50:03.486401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.607 qpair failed and we were unable to recover it. 00:34:11.607 [2024-10-08 17:50:03.486753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.607 [2024-10-08 17:50:03.486781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.607 qpair failed and we were unable to recover it. 00:34:11.607 [2024-10-08 17:50:03.487144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.607 [2024-10-08 17:50:03.487173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.607 qpair failed and we were unable to recover it. 00:34:11.607 [2024-10-08 17:50:03.487610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.607 [2024-10-08 17:50:03.487639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.607 qpair failed and we were unable to recover it. 00:34:11.607 [2024-10-08 17:50:03.487987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.608 [2024-10-08 17:50:03.488017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.608 qpair failed and we were unable to recover it. 00:34:11.608 [2024-10-08 17:50:03.488379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.608 [2024-10-08 17:50:03.488408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.608 qpair failed and we were unable to recover it. 00:34:11.608 [2024-10-08 17:50:03.488769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.608 [2024-10-08 17:50:03.488798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.608 qpair failed and we were unable to recover it. 00:34:11.608 [2024-10-08 17:50:03.489157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.608 [2024-10-08 17:50:03.489187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.608 qpair failed and we were unable to recover it. 
00:34:11.608 [2024-10-08 17:50:03.489437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.608 [2024-10-08 17:50:03.489465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.608 qpair failed and we were unable to recover it. 00:34:11.608 [2024-10-08 17:50:03.489820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.608 [2024-10-08 17:50:03.489849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.608 qpair failed and we were unable to recover it. 00:34:11.608 [2024-10-08 17:50:03.490099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.608 [2024-10-08 17:50:03.490129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.608 qpair failed and we were unable to recover it. 00:34:11.608 [2024-10-08 17:50:03.490467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.608 [2024-10-08 17:50:03.490495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.608 qpair failed and we were unable to recover it. 00:34:11.608 [2024-10-08 17:50:03.490862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.608 [2024-10-08 17:50:03.490890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.608 qpair failed and we were unable to recover it. 00:34:11.608 [2024-10-08 17:50:03.491324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.608 [2024-10-08 17:50:03.491355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.608 qpair failed and we were unable to recover it. 00:34:11.608 [2024-10-08 17:50:03.491603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.608 [2024-10-08 17:50:03.491635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.608 qpair failed and we were unable to recover it. 00:34:11.608 [2024-10-08 17:50:03.491986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.608 [2024-10-08 17:50:03.492026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.608 qpair failed and we were unable to recover it. 00:34:11.608 [2024-10-08 17:50:03.492359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.608 [2024-10-08 17:50:03.492389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.608 qpair failed and we were unable to recover it. 00:34:11.608 [2024-10-08 17:50:03.492803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-10-08 17:50:03.492831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 
00:34:11.609 [2024-10-08 17:50:03.493232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-10-08 17:50:03.493263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-10-08 17:50:03.493620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-10-08 17:50:03.493649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-10-08 17:50:03.494014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-10-08 17:50:03.494045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-10-08 17:50:03.494405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-10-08 17:50:03.494433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-10-08 17:50:03.494817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-10-08 17:50:03.494846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-10-08 17:50:03.495071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-10-08 17:50:03.495112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-10-08 17:50:03.495502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-10-08 17:50:03.495531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-10-08 17:50:03.495894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-10-08 17:50:03.495923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-10-08 17:50:03.496361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-10-08 17:50:03.496391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-10-08 17:50:03.496831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-10-08 17:50:03.496859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 
00:34:11.609 [2024-10-08 17:50:03.497212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-10-08 17:50:03.497242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-10-08 17:50:03.497605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-10-08 17:50:03.497634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-10-08 17:50:03.497883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.609 [2024-10-08 17:50:03.497913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.609 qpair failed and we were unable to recover it. 00:34:11.609 [2024-10-08 17:50:03.498270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-10-08 17:50:03.498300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-10-08 17:50:03.498668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-10-08 17:50:03.498697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-10-08 17:50:03.499056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-10-08 17:50:03.499086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-10-08 17:50:03.499475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-10-08 17:50:03.499504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-10-08 17:50:03.499870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-10-08 17:50:03.499898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-10-08 17:50:03.500252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-10-08 17:50:03.500281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-10-08 17:50:03.500677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-10-08 17:50:03.500706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 
00:34:11.610 [2024-10-08 17:50:03.500963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-10-08 17:50:03.501004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-10-08 17:50:03.501439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-10-08 17:50:03.501467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-10-08 17:50:03.501885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-10-08 17:50:03.501915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-10-08 17:50:03.502303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-10-08 17:50:03.502333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-10-08 17:50:03.502681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-10-08 17:50:03.502710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-10-08 17:50:03.503081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-10-08 17:50:03.503112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-10-08 17:50:03.503495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-10-08 17:50:03.503524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.610 [2024-10-08 17:50:03.503766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.610 [2024-10-08 17:50:03.503794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.610 qpair failed and we were unable to recover it. 00:34:11.611 [2024-10-08 17:50:03.504151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-10-08 17:50:03.504181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-10-08 17:50:03.504418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-10-08 17:50:03.504449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 
00:34:11.611 [2024-10-08 17:50:03.504862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-10-08 17:50:03.504890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-10-08 17:50:03.505259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-10-08 17:50:03.505288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-10-08 17:50:03.505637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-10-08 17:50:03.505667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-10-08 17:50:03.506008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-10-08 17:50:03.506039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-10-08 17:50:03.506402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-10-08 17:50:03.506431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.611 [2024-10-08 17:50:03.506802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.611 [2024-10-08 17:50:03.506831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.611 qpair failed and we were unable to recover it. 00:34:11.612 [2024-10-08 17:50:03.507160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-10-08 17:50:03.507189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-10-08 17:50:03.507440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-10-08 17:50:03.507470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-10-08 17:50:03.507855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-10-08 17:50:03.507883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-10-08 17:50:03.508274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-10-08 17:50:03.508305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 
00:34:11.612 [2024-10-08 17:50:03.508569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-10-08 17:50:03.508598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-10-08 17:50:03.508947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-10-08 17:50:03.509001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-10-08 17:50:03.509385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-10-08 17:50:03.509414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-10-08 17:50:03.509784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-10-08 17:50:03.509813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.612 [2024-10-08 17:50:03.510158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.612 [2024-10-08 17:50:03.510187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.612 qpair failed and we were unable to recover it. 00:34:11.613 [2024-10-08 17:50:03.510654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-10-08 17:50:03.510689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-10-08 17:50:03.510916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-10-08 17:50:03.510945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-10-08 17:50:03.511305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-10-08 17:50:03.511336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-10-08 17:50:03.511599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-10-08 17:50:03.511627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 00:34:11.613 [2024-10-08 17:50:03.511922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.613 [2024-10-08 17:50:03.511951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.613 qpair failed and we were unable to recover it. 
00:34:11.613 [2024-10-08 17:50:03.512300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.613 [2024-10-08 17:50:03.512331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.613 qpair failed and we were unable to recover it.
00:34:11.613 [... identical three-record sequence (posix.c:1055:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeated for every retry from 17:50:03.512 through 17:50:03.589 ...]
00:34:11.902 [2024-10-08 17:50:03.589926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.902 [2024-10-08 17:50:03.589957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.902 qpair failed and we were unable to recover it.
00:34:11.902 [2024-10-08 17:50:03.590306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.590335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.590700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.590729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.591143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.591174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.591537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.591566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.591809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.591837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.592215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.592245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.592612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.592641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.592993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.593024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.593384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.593412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.593754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.593783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 
00:34:11.902 [2024-10-08 17:50:03.594011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.594044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.594385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.594415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.594747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.594782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.595128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.595158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.595521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.595549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.595913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.595942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.596313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.596343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.596686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.596716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.597101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.597130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.597498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.597528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 
00:34:11.902 [2024-10-08 17:50:03.597894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.597923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.598297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.598327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.598658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.598686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.599041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.599072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.599428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.599458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.599835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.599863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.600204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.600235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.600634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.600662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.600999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.601029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.601476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.601504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 
00:34:11.902 [2024-10-08 17:50:03.601939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.601967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.602342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.602370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.602732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.602761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.603109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.603139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.902 qpair failed and we were unable to recover it. 00:34:11.902 [2024-10-08 17:50:03.603372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.902 [2024-10-08 17:50:03.603400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.603819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.603847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.604279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.604308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.604661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.604690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.605054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.605084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.605504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.605532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 
00:34:11.903 [2024-10-08 17:50:03.605890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.605919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.606304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.606334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.606691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.606720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.606963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.607007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.607335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.607364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.607730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.607758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.608129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.608159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.608515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.608544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.608915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.608943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.609190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.609220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 
00:34:11.903 [2024-10-08 17:50:03.609597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.609626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.609991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.610020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.610376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.610411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.610743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.610772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.611112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.611142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.611356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.611384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.611727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.611755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.612132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.612163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.612394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.612422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.612795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.612823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 
00:34:11.903 [2024-10-08 17:50:03.613188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.613218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.613601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.613629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.614012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.614042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.614384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.614412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.614760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.614789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.615238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.615267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.615511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.615541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.903 qpair failed and we were unable to recover it. 00:34:11.903 [2024-10-08 17:50:03.615893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.903 [2024-10-08 17:50:03.615922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.616205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.616236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.616673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.616702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 
00:34:11.904 [2024-10-08 17:50:03.617028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.617059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.617419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.617448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.617700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.617731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.618084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.618115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.618524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.618554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.618918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.618946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.619330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.619361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.619720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.619748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.620110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.620140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.620504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.620533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 
00:34:11.904 [2024-10-08 17:50:03.620902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.620931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.621312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.621341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.621592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.621621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.621985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.622016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.622365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.622393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.622764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.622792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.623158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.623188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.623528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.623557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.623919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.623948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.624191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.624220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 
00:34:11.904 [2024-10-08 17:50:03.624588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.624617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.624965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.625005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.625337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.625372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.625739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.625767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.626138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.626168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.626417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.626445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.626784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.626814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.627154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.627183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.627554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.627583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.627943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.627971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 
00:34:11.904 [2024-10-08 17:50:03.628354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.628383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.628730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.628759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.629130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.629160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.629520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.629548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.629912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.629941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.630302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.630332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.630687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.630715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.904 qpair failed and we were unable to recover it. 00:34:11.904 [2024-10-08 17:50:03.631056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.904 [2024-10-08 17:50:03.631086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 00:34:11.905 [2024-10-08 17:50:03.631456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.905 [2024-10-08 17:50:03.631486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 00:34:11.905 [2024-10-08 17:50:03.631730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.905 [2024-10-08 17:50:03.631758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 
00:34:11.905 [2024-10-08 17:50:03.632124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.905 [2024-10-08 17:50:03.632156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 00:34:11.905 [2024-10-08 17:50:03.632517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.905 [2024-10-08 17:50:03.632546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 00:34:11.905 [2024-10-08 17:50:03.632796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.905 [2024-10-08 17:50:03.632824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 00:34:11.905 [2024-10-08 17:50:03.633078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.905 [2024-10-08 17:50:03.633107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 00:34:11.905 [2024-10-08 17:50:03.633385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.905 [2024-10-08 17:50:03.633414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 00:34:11.905 [2024-10-08 17:50:03.633780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.905 [2024-10-08 17:50:03.633808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 00:34:11.905 [2024-10-08 17:50:03.634178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.905 [2024-10-08 17:50:03.634208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 00:34:11.905 [2024-10-08 17:50:03.634565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.905 [2024-10-08 17:50:03.634594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 00:34:11.905 [2024-10-08 17:50:03.634967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.905 [2024-10-08 17:50:03.635005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 00:34:11.905 [2024-10-08 17:50:03.635355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.905 [2024-10-08 17:50:03.635386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 
00:34:11.905 [2024-10-08 17:50:03.635740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.905 [2024-10-08 17:50:03.635769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 00:34:11.905 [2024-10-08 17:50:03.636142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.905 [2024-10-08 17:50:03.636172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 00:34:11.905 [2024-10-08 17:50:03.636530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.905 [2024-10-08 17:50:03.636559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 00:34:11.905 [2024-10-08 17:50:03.636903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.905 [2024-10-08 17:50:03.636931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 00:34:11.905 [2024-10-08 17:50:03.637296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.905 [2024-10-08 17:50:03.637327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 00:34:11.905 [2024-10-08 17:50:03.637692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.905 [2024-10-08 17:50:03.637721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 00:34:11.905 [2024-10-08 17:50:03.638086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.905 [2024-10-08 17:50:03.638115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 00:34:11.905 [2024-10-08 17:50:03.638477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.905 [2024-10-08 17:50:03.638506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 00:34:11.905 [2024-10-08 17:50:03.638867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.905 [2024-10-08 17:50:03.638897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 00:34:11.905 [2024-10-08 17:50:03.639234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.905 [2024-10-08 17:50:03.639263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.905 qpair failed and we were unable to recover it. 
00:34:11.905 [2024-10-08 17:50:03.639676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.905 [2024-10-08 17:50:03.639705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.905 qpair failed and we were unable to recover it.
[... the same three-line error triplet (posix.c:1055 connect() failed, errno = 111; nvme_tcp.c:2399 sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for roughly 200 further connect() attempts, timestamped between 17:50:03.640067 and 17:50:03.718153 ...]
00:34:11.910 [2024-10-08 17:50:03.718517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.910 [2024-10-08 17:50:03.718546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.910 qpair failed and we were unable to recover it.
00:34:11.910 [2024-10-08 17:50:03.718905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.910 [2024-10-08 17:50:03.718934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.910 qpair failed and we were unable to recover it. 00:34:11.910 [2024-10-08 17:50:03.719339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.910 [2024-10-08 17:50:03.719369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.910 qpair failed and we were unable to recover it. 00:34:11.910 [2024-10-08 17:50:03.719742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.910 [2024-10-08 17:50:03.719771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.910 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.720140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.720170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.720527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.720556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.720927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.720955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.721325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.721354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.721563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.721593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.721957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.721995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.722393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.722422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 
00:34:11.911 [2024-10-08 17:50:03.722780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.722808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.723171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.723199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.723634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.723662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.723902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.723930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.724296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.724327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.724664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.724693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.725032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.725062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.725439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.725467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.725831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.725859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.726137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.726173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 
00:34:11.911 [2024-10-08 17:50:03.726405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.726435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.726808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.726837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.727201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.727231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.727590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.727618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.728003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.728033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.728364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.728391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.728633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.728664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.728905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.728936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.729336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.729366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.729718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.729747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 
00:34:11.911 [2024-10-08 17:50:03.730118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.730147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.730516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.730545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.730892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.730921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.731294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.731324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.731550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.731580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.731931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.731960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.732298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.732327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.732686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.732715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.733086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.733116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.733463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.733492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 
00:34:11.911 [2024-10-08 17:50:03.733835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.733864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.734200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.734231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.911 [2024-10-08 17:50:03.734571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.911 [2024-10-08 17:50:03.734599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.911 qpair failed and we were unable to recover it. 00:34:11.912 [2024-10-08 17:50:03.734957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.912 [2024-10-08 17:50:03.734994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.912 qpair failed and we were unable to recover it. 00:34:11.912 [2024-10-08 17:50:03.735353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.912 [2024-10-08 17:50:03.735381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.912 qpair failed and we were unable to recover it. 00:34:11.912 [2024-10-08 17:50:03.735737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.912 [2024-10-08 17:50:03.735766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.912 qpair failed and we were unable to recover it. 00:34:11.912 [2024-10-08 17:50:03.736035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.912 [2024-10-08 17:50:03.736066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.912 qpair failed and we were unable to recover it. 00:34:11.912 [2024-10-08 17:50:03.736434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.912 [2024-10-08 17:50:03.736463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.912 qpair failed and we were unable to recover it. 00:34:11.912 [2024-10-08 17:50:03.736812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.912 [2024-10-08 17:50:03.736841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.912 qpair failed and we were unable to recover it. 00:34:11.912 [2024-10-08 17:50:03.737222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.912 [2024-10-08 17:50:03.737251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.912 qpair failed and we were unable to recover it. 
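The errno = 111 in the posix_sock_create errors above is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 (4420 is the standard NVMe over Fabrics port), so every connect() attempt from the host's TCP qpair is refused and reported as an unrecoverable qpair failure. The "Killed" line below shows the nvmf_tgt application being terminated, which is the disconnect this test case exercises. A minimal shell sketch of the same reachability check, offered only as an illustration (the 1-second timeout and the use of bash's /dev/tcp redirection are assumptions, not part of the harness):

# Probe the NVMe/TCP listener the way the failing connect() does.
# A refused connection here corresponds to the errno = 111 lines above.
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "something is listening on 10.0.0.2:4420"
else
    echo "connect() to 10.0.0.2:4420 was refused or timed out"
fi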
00:34:11.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 562998 Killed "${NVMF_APP[@]}" "$@"
00:34:11.912 17:50:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:34:11.912 17:50:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:34:11.912 17:50:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:34:11.912 17:50:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:34:11.912 17:50:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:11.913 17:50:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=563882
00:34:11.913 17:50:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 563882
00:34:11.913 17:50:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:11.913 17:50:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 563882 ']'
00:34:11.913 17:50:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:11.913 17:50:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:34:11.913 17:50:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:11.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:11.913 17:50:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:34:11.913 17:50:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
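Here the harness restarts the target (nvmf_tgt pid 563882 inside the cvl_0_0_ns_spdk namespace) and waitforlisten polls, up to max_retries=100 times, until the new process is alive and its RPC socket /var/tmp/spdk.sock accepts connections. A minimal sketch of that kind of wait loop, assuming python3 is available for the UNIX-socket probe; the 0.1 s interval and the probe method are illustrative, not the harness's actual implementation:

pid=563882; rpc_addr=/var/tmp/spdk.sock; max_retries=100
for ((i = 0; i < max_retries; i++)); do
    # Give up early if the freshly launched target died.
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited before listening"; break; }
    # Succeeds once something is accepting connections on the RPC socket.
    if python3 -c "import socket,sys; s=socket.socket(socket.AF_UNIX); s.connect(sys.argv[1])" "$rpc_addr" 2>/dev/null; then
        echo "nvmf_tgt is listening on $rpc_addr"
        break
    fi
    sleep 0.1
done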
00:34:11.915 [2024-10-08 17:50:03.777630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.915 [2024-10-08 17:50:03.777659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.915 qpair failed and we were unable to recover it.
00:34:11.915 [2024-10-08 17:50:03.777912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.777941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.778418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.778449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.778862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.778893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.779138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.779169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.779429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.779459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.779729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.779758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.780070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.780101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.780487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.780519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.780866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.780896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.781252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.781283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 
00:34:11.915 [2024-10-08 17:50:03.781546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.781576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.781800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.781834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.782195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.782229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.782614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.782644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.783018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.783050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.783309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.783338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.783720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.783748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.784104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.784136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.784392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.784423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.784565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.784596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 
00:34:11.915 [2024-10-08 17:50:03.784971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.785014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.785422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.785452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.785690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.785720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.785963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.786015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.786236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.786274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.786659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.786689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.787060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.787091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.787322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.787352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.787712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.787743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.915 [2024-10-08 17:50:03.788112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.788142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 
00:34:11.915 [2024-10-08 17:50:03.788535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.915 [2024-10-08 17:50:03.788564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.915 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.788912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.788941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.789208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.789241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.789601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.789631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.789998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.790029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.790260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.790291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.790546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.790574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.790935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.790965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.791216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.791248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.791612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.791642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 
00:34:11.916 [2024-10-08 17:50:03.791932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.791963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.792333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.792364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.792721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.792751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.793097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.793130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.793348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.793378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.793685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.793715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.794087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.794117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.794465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.794508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.794865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.794895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.795286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.795319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 
00:34:11.916 [2024-10-08 17:50:03.795678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.795709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.796012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.796043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.796273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.796306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.796596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.796626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.797070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.797109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.797354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.797385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.797748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.797777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.798238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.798268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.798649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.798678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.798899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.798931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 
00:34:11.916 [2024-10-08 17:50:03.799339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.799369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.799742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.799771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.800150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.800180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.800579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.800609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.800957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.801006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.916 [2024-10-08 17:50:03.801386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.916 [2024-10-08 17:50:03.801415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.916 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.801786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.801815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.802159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.802189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.802587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.802616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.802875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.802903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 
00:34:11.917 [2024-10-08 17:50:03.803284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.803314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.803685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.803714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.803927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.803955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.804342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.804371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.804659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.804687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.804952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.805006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.805406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.805436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.805696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.805725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.806097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.806129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.806365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.806398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 
00:34:11.917 [2024-10-08 17:50:03.807133] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
00:34:11.917 [2024-10-08 17:50:03.807204] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:11.917 [2024-10-08 17:50:03.807169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.917 [2024-10-08 17:50:03.807205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.917 qpair failed and we were unable to recover it.
00:34:11.917 [2024-10-08 17:50:03.809971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.810034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.810397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.810429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.810793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.810825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.811211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.811247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.811658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.811688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.812032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.812063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.812411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.812440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.812803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.812832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.813205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.813235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.813583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.813613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 
00:34:11.917 [2024-10-08 17:50:03.813987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.814018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.814401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.814431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.814720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.814749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.815131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.815163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.815514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.815544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.917 [2024-10-08 17:50:03.815838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.917 [2024-10-08 17:50:03.815867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.917 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.816212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.816244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.816619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.816647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.817007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.817038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.817303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.817332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 
00:34:11.918 [2024-10-08 17:50:03.817561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.817590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.817934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.817963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.818375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.818405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.818758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.818786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.819035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.819068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.819393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.819423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.819788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.819817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.820256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.820293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.820677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.820708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.821013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.821043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 
00:34:11.918 [2024-10-08 17:50:03.821327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.821356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.821702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.821730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.822076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.822107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.822475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.822504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.822845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.822875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.823041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.823070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.823307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.823341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.823718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.823749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.824021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.824052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.824396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.824424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 
00:34:11.918 [2024-10-08 17:50:03.824800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.824830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.825284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.825316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.825696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.825725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.826017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.826047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.826395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.826424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.826833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.826862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.827119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.827154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.827380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.827412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.827790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.827820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.828226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.828256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 
00:34:11.918 [2024-10-08 17:50:03.828616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.828644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.829033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.829063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.829227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.829257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.829500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.829531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.918 [2024-10-08 17:50:03.829873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.918 [2024-10-08 17:50:03.829904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.918 qpair failed and we were unable to recover it. 00:34:11.919 [2024-10-08 17:50:03.830304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-10-08 17:50:03.830334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-10-08 17:50:03.830680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-10-08 17:50:03.830711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-10-08 17:50:03.830944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-10-08 17:50:03.830973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-10-08 17:50:03.831283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-10-08 17:50:03.831312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 00:34:11.919 [2024-10-08 17:50:03.831651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.919 [2024-10-08 17:50:03.831681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:11.919 qpair failed and we were unable to recover it. 
00:34:11.919 [2024-10-08 17:50:03.832056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.832086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.832463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.832494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.832885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.832914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.833276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.833308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.833672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.833701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.834073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.834105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.834248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.834278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.834653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.834689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.835043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.835075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.835513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.835543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.836005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.836036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.836417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.836448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.836810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.836839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.837105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.837135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.837416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.837445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.837803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.837833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.838185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.838216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.838584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.838614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.838984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.839014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.839393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.839422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.839763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.839792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.840162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.840192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.840548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.840578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.840938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.840967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.841262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.841293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.841546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.841576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.841926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.841955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.842318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.842348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.842713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.842742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.843096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.843129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.843471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.843501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.843864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.843895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.844243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.844273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.844621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.844651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.919 [2024-10-08 17:50:03.845087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.919 [2024-10-08 17:50:03.845117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.919 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.845480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.845509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.845801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.845830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.846204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.846234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.846499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.846528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.846994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.847025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.847240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.847271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.847634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.847663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.848044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.848074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.848455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.848483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.848856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.848885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.849145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.849174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.849545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.849578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.849999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.850037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.850426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.850455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.850828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.850857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.851299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.851331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.851687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.851717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.852057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.852089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.852441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.852470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.852717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.852749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.852992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.853024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.853381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.853410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.853772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.853800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.854281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.854310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.854657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.854686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.855056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.855086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.855480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.855509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.855734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.855766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.856123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.856152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.856510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.856539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.856905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.856934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.857358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.857388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.857747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.857776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.858032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.858062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.858478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.858507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.858739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.858767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.859127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.859158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.859505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.859535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.859901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.859930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.920 [2024-10-08 17:50:03.860314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.920 [2024-10-08 17:50:03.860351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.920 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.860712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.860742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.861104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.861134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.861516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.861545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.861904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.861932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.862387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.862418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.862768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.862797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.863163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.863193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.863621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.863651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.863995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.864025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.864455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.864484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.864816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.864846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.865156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.865186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.865449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.865478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.865837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.865866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.866108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.866139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.866458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.866487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.866858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.866888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.867310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.867341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.867582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.867610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.867969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.868011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.868345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.868377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.868730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.868759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.869123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.869153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.869538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.869567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.869925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.869954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.870350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.870380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.870770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.870800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.871186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.871216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.921 qpair failed and we were unable to recover it.
00:34:11.921 [2024-10-08 17:50:03.871575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.921 [2024-10-08 17:50:03.871604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.922 qpair failed and we were unable to recover it.
00:34:11.922 [2024-10-08 17:50:03.871969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.922 [2024-10-08 17:50:03.872009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.922 qpair failed and we were unable to recover it.
00:34:11.922 [2024-10-08 17:50:03.872410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.922 [2024-10-08 17:50:03.872440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.922 qpair failed and we were unable to recover it.
00:34:11.922 [2024-10-08 17:50:03.872766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.922 [2024-10-08 17:50:03.872797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.922 qpair failed and we were unable to recover it.
00:34:11.922 [2024-10-08 17:50:03.873153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.922 [2024-10-08 17:50:03.873184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.922 qpair failed and we were unable to recover it.
00:34:11.922 [2024-10-08 17:50:03.873552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.922 [2024-10-08 17:50:03.873581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.922 qpair failed and we were unable to recover it.
00:34:11.922 [2024-10-08 17:50:03.874030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.922 [2024-10-08 17:50:03.874061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.922 qpair failed and we were unable to recover it.
00:34:11.922 [2024-10-08 17:50:03.874435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.922 [2024-10-08 17:50:03.874464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.922 qpair failed and we were unable to recover it.
00:34:11.922 [2024-10-08 17:50:03.874814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.922 [2024-10-08 17:50:03.874843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.922 qpair failed and we were unable to recover it.
00:34:11.922 [2024-10-08 17:50:03.875199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.922 [2024-10-08 17:50:03.875231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.922 qpair failed and we were unable to recover it.
00:34:11.922 [2024-10-08 17:50:03.875470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.922 [2024-10-08 17:50:03.875499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.922 qpair failed and we were unable to recover it.
00:34:11.922 [2024-10-08 17:50:03.875756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.922 [2024-10-08 17:50:03.875793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.922 qpair failed and we were unable to recover it.
00:34:11.922 [2024-10-08 17:50:03.876208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.922 [2024-10-08 17:50:03.876239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.922 qpair failed and we were unable to recover it.
00:34:11.922 [2024-10-08 17:50:03.876580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.922 [2024-10-08 17:50:03.876610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:11.922 qpair failed and we were unable to recover it.
00:34:12.196 [2024-10-08 17:50:03.876992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.196 [2024-10-08 17:50:03.877025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.196 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.877391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.877421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.877659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.877688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.878041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.878072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.878435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.878464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.878835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.878865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.879252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.879284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.879612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.879641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.880012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.880043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.880429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.880458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.880848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.880877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.881276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.881306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.881721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.881750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.882117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.882146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.882517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.882546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.882908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.882937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.883111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.883145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.883520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.883548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.883907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.883937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.884322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.884353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.884760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.884789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.885137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.885169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.885511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.885541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.885861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.885898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.886256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.886287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.886645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.886674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.886927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.886955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.887353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.887384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.887759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.887790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.888018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.888049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.888307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.888337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.888682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.888712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.889041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.889071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.889465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.889495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.889882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.889913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.890257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.890287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.890648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.890678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.891044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.891080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.891336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.891367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.197 [2024-10-08 17:50:03.891577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.197 [2024-10-08 17:50:03.891606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.197 qpair failed and we were unable to recover it.
00:34:12.198 [2024-10-08 17:50:03.891802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.198 [2024-10-08 17:50:03.891830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.198 qpair failed and we were unable to recover it.
00:34:12.198 [2024-10-08 17:50:03.892199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.198 [2024-10-08 17:50:03.892230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.198 qpair failed and we were unable to recover it.
00:34:12.198 [2024-10-08 17:50:03.892480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.198 [2024-10-08 17:50:03.892509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.198 qpair failed and we were unable to recover it.
00:34:12.198 [2024-10-08 17:50:03.892942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.198 [2024-10-08 17:50:03.892972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.198 qpair failed and we were unable to recover it.
00:34:12.198 [2024-10-08 17:50:03.893391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.198 [2024-10-08 17:50:03.893422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.198 qpair failed and we were unable to recover it.
00:34:12.198 [2024-10-08 17:50:03.893786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.198 [2024-10-08 17:50:03.893817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.198 qpair failed and we were unable to recover it.
00:34:12.198 [2024-10-08 17:50:03.894203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.198 [2024-10-08 17:50:03.894235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.198 qpair failed and we were unable to recover it.
00:34:12.198 [2024-10-08 17:50:03.894600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.198 [2024-10-08 17:50:03.894630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.198 qpair failed and we were unable to recover it.
00:34:12.198 [2024-10-08 17:50:03.894997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.198 [2024-10-08 17:50:03.895027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.198 qpair failed and we were unable to recover it.
00:34:12.198 [2024-10-08 17:50:03.895418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.198 [2024-10-08 17:50:03.895449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.198 qpair failed and we were unable to recover it.
00:34:12.198 [2024-10-08 17:50:03.895802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.198 [2024-10-08 17:50:03.895832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.198 qpair failed and we were unable to recover it.
00:34:12.198 [2024-10-08 17:50:03.896076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.198 [2024-10-08 17:50:03.896109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.198 qpair failed and we were unable to recover it.
00:34:12.198 [2024-10-08 17:50:03.896325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.198 [2024-10-08 17:50:03.896358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.198 qpair failed and we were unable to recover it.
00:34:12.198 [2024-10-08 17:50:03.899305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:12.203 [2024-10-08 17:50:03.968002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.203 [2024-10-08 17:50:03.968032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.203 qpair failed and we were unable to recover it.
00:34:12.203 [2024-10-08 17:50:03.968252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.968283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.968647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.968676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.969038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.969069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.969282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.969310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.969541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.969572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.969934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.969964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.970297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.970327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.970693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.970723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.971098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.971134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.971501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.971530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 
00:34:12.203 [2024-10-08 17:50:03.971901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.971929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.972295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.972325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.972613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.972642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.973011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.973041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.973445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.973474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.973718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.973747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.974175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.974205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.974554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.974583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.974930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.974958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.975341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.975372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 
00:34:12.203 [2024-10-08 17:50:03.975728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.975757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.976146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.976176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.976546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.976576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.976897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.976925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.977295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.977325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.977570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.977603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.977805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.977834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.978225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.978256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.978492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.978521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.203 [2024-10-08 17:50:03.978752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.978780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 
00:34:12.203 [2024-10-08 17:50:03.979172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.203 [2024-10-08 17:50:03.979201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.203 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.979563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.979592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.979953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.979993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.980328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.980357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.980717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.980745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.980988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.981022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.981401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.981431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.981795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.981825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.982184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.982215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.982624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.982653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 
00:34:12.204 [2024-10-08 17:50:03.983008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.983038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.983403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.983431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.983657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.983685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.984117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.984146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.984515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.984545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.984908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.984937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.985313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.985343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.985704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.985733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.986104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.986141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.986490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.986519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 
00:34:12.204 [2024-10-08 17:50:03.986893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.986922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.987307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.987337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.987676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.987705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.987826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.987856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.988293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.988323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.988682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.988712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.989080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.989110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.989485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.989514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.989885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.989912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 00:34:12.204 [2024-10-08 17:50:03.990269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.204 [2024-10-08 17:50:03.990299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.204 qpair failed and we were unable to recover it. 
00:34:12.204 [2024-10-08 17:50:03.993022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.204 [2024-10-08 17:50:03.993054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.204 qpair failed and we were unable to recover it.
00:34:12.204 [2024-10-08 17:50:03.993041] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:12.204 [2024-10-08 17:50:03.993089] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:12.204 [2024-10-08 17:50:03.993099] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:12.204 [2024-10-08 17:50:03.993106] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:12.204 [2024-10-08 17:50:03.993113] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
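The app_setup_trace notices above spell out the trace-capture recipe. A minimal sketch of following it on the test node could look like the two commands below; the commands themselves are quoted from the notices, while the destination path for the copy is only an illustrative assumption:

    # Snapshot trace events from the running nvmf app (shared-memory instance 0),
    # as the 'app.c: 611' notice suggests.
    spdk_trace -s nvmf -i 0
    # Or keep the raw trace file for offline analysis/debug ('app.c: 618' notice);
    # the /tmp destination here is hypothetical.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0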
00:34:12.205 [2024-10-08 17:50:03.995370] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5
00:34:12.205 [2024-10-08 17:50:03.995575] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6
00:34:12.205 [2024-10-08 17:50:03.995737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.205 [2024-10-08 17:50:03.995767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.205 qpair failed and we were unable to recover it.
00:34:12.205 [2024-10-08 17:50:03.995738] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7
00:34:12.205 [2024-10-08 17:50:03.995739] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4
00:34:12.207 [2024-10-08 17:50:04.027247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.207 [2024-10-08 17:50:04.027279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.207 qpair failed and we were unable to recover it.
00:34:12.207 [2024-10-08 17:50:04.027667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.207 [2024-10-08 17:50:04.027697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.207 qpair failed and we were unable to recover it. 00:34:12.207 [2024-10-08 17:50:04.028065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.207 [2024-10-08 17:50:04.028097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.207 qpair failed and we were unable to recover it. 00:34:12.207 [2024-10-08 17:50:04.028380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.207 [2024-10-08 17:50:04.028408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.207 qpair failed and we were unable to recover it. 00:34:12.207 [2024-10-08 17:50:04.028818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.207 [2024-10-08 17:50:04.028847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.207 qpair failed and we were unable to recover it. 00:34:12.207 [2024-10-08 17:50:04.029111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.207 [2024-10-08 17:50:04.029141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.207 qpair failed and we were unable to recover it. 00:34:12.207 [2024-10-08 17:50:04.029530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.207 [2024-10-08 17:50:04.029560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.207 qpair failed and we were unable to recover it. 00:34:12.207 [2024-10-08 17:50:04.029671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.207 [2024-10-08 17:50:04.029699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.207 qpair failed and we were unable to recover it. 00:34:12.207 [2024-10-08 17:50:04.030083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.207 [2024-10-08 17:50:04.030113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.207 qpair failed and we were unable to recover it. 00:34:12.207 [2024-10-08 17:50:04.030523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.207 [2024-10-08 17:50:04.030552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.207 qpair failed and we were unable to recover it. 00:34:12.207 [2024-10-08 17:50:04.030817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.207 [2024-10-08 17:50:04.030846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.207 qpair failed and we were unable to recover it. 
00:34:12.207 [2024-10-08 17:50:04.031201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.207 [2024-10-08 17:50:04.031232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.207 qpair failed and we were unable to recover it. 00:34:12.207 [2024-10-08 17:50:04.031581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.207 [2024-10-08 17:50:04.031612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.207 qpair failed and we were unable to recover it. 00:34:12.207 [2024-10-08 17:50:04.031781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.207 [2024-10-08 17:50:04.031810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.207 qpair failed and we were unable to recover it. 00:34:12.207 [2024-10-08 17:50:04.032198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.207 [2024-10-08 17:50:04.032229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.207 qpair failed and we were unable to recover it. 00:34:12.207 [2024-10-08 17:50:04.032585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.207 [2024-10-08 17:50:04.032614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.207 qpair failed and we were unable to recover it. 00:34:12.207 [2024-10-08 17:50:04.032833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.032863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.032970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.033013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.033242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.033275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.033485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.033516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.033889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.033920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 
00:34:12.208 [2024-10-08 17:50:04.034155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.034186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.034557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.034587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.034963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.035006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.035257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.035286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.035541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.035570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.035842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.035871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.036233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.036264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.036489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.036518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.036782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.036811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.037160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.037192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 
00:34:12.208 [2024-10-08 17:50:04.037412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.037441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.037813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.037843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.038253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.038283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.038659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.038688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.038914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.038943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.039336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.039367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.039796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.039825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.040034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.040065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.040296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.040332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.040685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.040714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 
00:34:12.208 [2024-10-08 17:50:04.041107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.041138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.041536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.041565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.041785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.041813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.042026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.042056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.042424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.042454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.042811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.042840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.043202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.043234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.043473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.043501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.043885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.043913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.044265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.044296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 
00:34:12.208 [2024-10-08 17:50:04.044557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.044585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.044802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.044830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.045224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.045254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.045492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.045520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.045878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.208 [2024-10-08 17:50:04.045908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.208 qpair failed and we were unable to recover it. 00:34:12.208 [2024-10-08 17:50:04.046161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.046191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.046439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.046469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.046899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.046929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.047178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.047208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.047549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.047579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 
00:34:12.209 [2024-10-08 17:50:04.047945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.047987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.048360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.048389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.048651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.048678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.049041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.049072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.049334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.049362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.049600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.049631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.050001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.050034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.050297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.050329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.050737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.050767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.050995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.051025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 
00:34:12.209 [2024-10-08 17:50:04.051275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.051304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.051551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.051579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.051905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.051935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.052346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.052377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.052611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.052639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.052893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.052926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.053386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.053419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.053801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.053831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.054281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.054319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.054670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.054700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 
00:34:12.209 [2024-10-08 17:50:04.055186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.055217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.055468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.055500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.055628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.055655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.055812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.055850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.056070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.056100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.056483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.056511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.056777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.056806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.057025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.057056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.057403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.057433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.057777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.057806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 
00:34:12.209 [2024-10-08 17:50:04.058145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.058175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.058415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.058447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.058678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.058708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.209 [2024-10-08 17:50:04.058910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.209 [2024-10-08 17:50:04.058938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.209 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.059111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.059139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.059527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.059555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.059799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.059829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.060190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.060220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.060519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.060548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.060905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.060934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 
00:34:12.210 [2024-10-08 17:50:04.061272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.061302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.061663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.061692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.062023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.062052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.062431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.062459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.062698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.062726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.063131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.063162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.063396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.063428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.063654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.063684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.064036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.064065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.064481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.064510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 
00:34:12.210 [2024-10-08 17:50:04.068366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.068471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.068912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.068950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.069382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.069416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.069629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.069659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.070013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.070045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.070415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.070445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.070802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.070831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.071082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.071112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.071455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.071498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.071853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.071883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 
00:34:12.210 [2024-10-08 17:50:04.072148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.072179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.072539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.072568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.072754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.072783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.072872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.072900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.073150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.073181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.073518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.073548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.073901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.073930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.074378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.074412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.074836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.074866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 00:34:12.210 [2024-10-08 17:50:04.075236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.210 [2024-10-08 17:50:04.075265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.210 qpair failed and we were unable to recover it. 
00:34:12.210 [2024-10-08 17:50:04.075512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.210 [2024-10-08 17:50:04.075549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.210 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / qpair failed and we were unable to recover it) repeats continuously with only the timestamps changing, first for tqpair=0x7faa44000b90 and then for tqpair=0x7faa50000b90, every attempt against addr=10.0.0.2, port=4420 ...]
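For context when reading the block above: errno = 111 in the posix_sock_create messages is Linux ECONNREFUSED, i.e. the TCP connection attempt reached 10.0.0.2 but nothing was accepting on port 4420 (the standard NVMe/TCP port), so the host-side nvme_tcp_qpair_connect_sock cannot re-establish the qpair's socket and keeps retrying. The following minimal standalone sketch (not SPDK source; it only borrows the address and port that appear in this log) reproduces the same errno by connecting to a port with no listener:

/*
 * Minimal sketch of the failure mode above: on Linux, errno 111 is
 * ECONNREFUSED, which connect() returns when the target host answers
 * but no listener is bound to the port. Any reachable host/port with
 * no listener reproduces the same errno.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { .sin_family = AF_INET };
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no NVMe/TCP target listening, this prints:
         *   connect() failed, errno = 111 (Connection refused)
         * mirroring the posix_sock_create error in the log. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}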
00:34:12.214 [2024-10-08 17:50:04.120236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xece0f0 (9): Bad file descriptor
00:34:12.214 [2024-10-08 17:50:04.120538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.214 [2024-10-08 17:50:04.120601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:12.214 qpair failed and we were unable to recover it.
[... the connect()/qpair-failure triplet then repeats for tqpair=0x7faa48000b90, again with only the timestamps changing ...]
00:34:12.216 [2024-10-08 17:50:04.147392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.147422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.147783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.147812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.148198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.148228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.148567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.148597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.148957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.148997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.149360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.149388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.149755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.149786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.150050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.150080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.150485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.150514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.150716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.150745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 
00:34:12.216 [2024-10-08 17:50:04.151111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.151141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.151369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.151398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.151645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.151673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.151886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.151914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.152289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.152320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.152548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.152577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.152950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.152993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.153358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.153388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.153746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.153775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.154008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.154039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 
00:34:12.216 [2024-10-08 17:50:04.154266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.154294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.154535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.154564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.154920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.154948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.155342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.155372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.155732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.155761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.216 [2024-10-08 17:50:04.155899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.216 [2024-10-08 17:50:04.155928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.216 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.156174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.156205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.156434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.156462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.156848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.156878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.157140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.157169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 
00:34:12.217 [2024-10-08 17:50:04.157532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.157561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.157932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.157962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.158339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.158376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.158754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.158784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.159003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.159033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.159456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.159484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.159849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.159878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.160261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.160292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.160664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.160693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.161067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.161096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 
00:34:12.217 [2024-10-08 17:50:04.161475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.161504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.161866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.161896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.162263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.162292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.162637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.162666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.163037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.163067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.163437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.163466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.163830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.163860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.164091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.164121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.164468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.164497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.164871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.164901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 
00:34:12.217 [2024-10-08 17:50:04.165124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.165155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.165447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.165481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.165720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.165750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.166159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.166189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.166401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.166431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.166750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.166779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.167159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.167190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.167426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.167454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.167568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.167599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.167955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.167996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 
00:34:12.217 [2024-10-08 17:50:04.168300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.168329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.168533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.168562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.168988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.169019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.169398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.169427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.169787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.169816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.217 [2024-10-08 17:50:04.170155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.217 [2024-10-08 17:50:04.170188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.217 qpair failed and we were unable to recover it. 00:34:12.218 [2024-10-08 17:50:04.170406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.218 [2024-10-08 17:50:04.170436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.218 qpair failed and we were unable to recover it. 00:34:12.218 [2024-10-08 17:50:04.170664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.218 [2024-10-08 17:50:04.170693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.218 qpair failed and we were unable to recover it. 00:34:12.218 [2024-10-08 17:50:04.170940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.218 [2024-10-08 17:50:04.170967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.218 qpair failed and we were unable to recover it. 00:34:12.218 [2024-10-08 17:50:04.171208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.218 [2024-10-08 17:50:04.171241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.218 qpair failed and we were unable to recover it. 
00:34:12.218 [2024-10-08 17:50:04.171588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.218 [2024-10-08 17:50:04.171618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.218 qpair failed and we were unable to recover it. 00:34:12.218 [2024-10-08 17:50:04.171962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.218 [2024-10-08 17:50:04.172003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.218 qpair failed and we were unable to recover it. 00:34:12.218 [2024-10-08 17:50:04.172408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.218 [2024-10-08 17:50:04.172444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.218 qpair failed and we were unable to recover it. 00:34:12.218 [2024-10-08 17:50:04.172786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.218 [2024-10-08 17:50:04.172816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.218 qpair failed and we were unable to recover it. 00:34:12.218 [2024-10-08 17:50:04.173185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.218 [2024-10-08 17:50:04.173216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.218 qpair failed and we were unable to recover it. 00:34:12.218 [2024-10-08 17:50:04.173458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.218 [2024-10-08 17:50:04.173487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.218 qpair failed and we were unable to recover it. 00:34:12.218 [2024-10-08 17:50:04.173838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.218 [2024-10-08 17:50:04.173866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.218 qpair failed and we were unable to recover it. 00:34:12.218 [2024-10-08 17:50:04.174264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.218 [2024-10-08 17:50:04.174294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.218 qpair failed and we were unable to recover it. 00:34:12.218 [2024-10-08 17:50:04.174687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.218 [2024-10-08 17:50:04.174717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.218 qpair failed and we were unable to recover it. 00:34:12.218 [2024-10-08 17:50:04.174941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.218 [2024-10-08 17:50:04.174971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.218 qpair failed and we were unable to recover it. 
00:34:12.218 [2024-10-08 17:50:04.175225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.218 [2024-10-08 17:50:04.175256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.218 qpair failed and we were unable to recover it. 00:34:12.218 [2024-10-08 17:50:04.175631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.218 [2024-10-08 17:50:04.175662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.218 qpair failed and we were unable to recover it. 00:34:12.218 [2024-10-08 17:50:04.175897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.218 [2024-10-08 17:50:04.175926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.218 qpair failed and we were unable to recover it. 00:34:12.218 [2024-10-08 17:50:04.176273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.218 [2024-10-08 17:50:04.176304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.218 qpair failed and we were unable to recover it. 00:34:12.218 [2024-10-08 17:50:04.176523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.218 [2024-10-08 17:50:04.176551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.218 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.176928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.176961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.177303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.177333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.177605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.177633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.177994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.178024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.178402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.178432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 
00:34:12.493 [2024-10-08 17:50:04.178771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.178801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.179027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.179056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.179429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.179459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.179839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.179869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.180072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.180102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.180335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.180364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.180568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.180597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.180966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.181002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.181276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.181304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.181668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.181700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 
00:34:12.493 [2024-10-08 17:50:04.182044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.182075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.182417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.182447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.182825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.182855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.183079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.183110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.183358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.183393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.183626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.183660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.184047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.184079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.184437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.184466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.184815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.184845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 00:34:12.493 [2024-10-08 17:50:04.185067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.185096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.493 qpair failed and we were unable to recover it. 
00:34:12.493 [2024-10-08 17:50:04.185345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.493 [2024-10-08 17:50:04.185374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.185756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.185786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.186157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.186195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.186555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.186584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.186944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.186983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.187346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.187374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.187743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.187771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.188025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.188056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.188404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.188433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.188796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.188826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 
00:34:12.494 [2024-10-08 17:50:04.189033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.189064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.189300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.189328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.189671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.189699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.190073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.190103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.190478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.190507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.190876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.190905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.191124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.191154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.191279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.191306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.191549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.191577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.191802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.191831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 
00:34:12.494 [2024-10-08 17:50:04.192200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.192230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.192585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.192613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.192986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.193017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.193368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.193406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.193738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.193767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.194145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.194175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.194416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.194445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.194797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.194825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.195173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.195204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.195432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.195462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 
00:34:12.494 [2024-10-08 17:50:04.195802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.195833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.196196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.196225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.196588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.196616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.196869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.196898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.197152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.197181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.197537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.197566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.197938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.197967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.198224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.198253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.198481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.198513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.494 [2024-10-08 17:50:04.198849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.198878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 
00:34:12.494 [2024-10-08 17:50:04.199223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.494 [2024-10-08 17:50:04.199254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.494 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.199618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.199649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.200099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.200136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.200482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.200512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.200887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.200915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.201136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.201164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.201480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.201508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.201727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.201756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.201965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.202020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.202397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.202427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 
00:34:12.495 [2024-10-08 17:50:04.202774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.202802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.203156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.203186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.203563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.203591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.203815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.203847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.204199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.204239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.204579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.204608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.204833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.204863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.205114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.205143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.205358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.205387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.205512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.205539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 
00:34:12.495 [2024-10-08 17:50:04.205898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.205927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.206309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.206340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.206555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.206583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.206674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.206702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.207034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.207129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.207382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.207415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.207667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.207697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.208255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.208358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.208686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.208725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.209029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.209070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 
00:34:12.495 [2024-10-08 17:50:04.209440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.209470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.209709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.209738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.210131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.210162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.210488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.210518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.210972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.211020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.211260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.211288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.211525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.211554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.211760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.211788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.212156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.212186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 00:34:12.495 [2024-10-08 17:50:04.212412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.495 [2024-10-08 17:50:04.212440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.495 qpair failed and we were unable to recover it. 
00:34:12.495 [2024-10-08 17:50:04.212535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.212562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.212919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.212948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.213364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.213395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.213766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.213796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.214182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.214212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.214430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.214459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.214736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.214766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.215208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.215239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.215579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.215607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.215826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.215854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 
00:34:12.496 [2024-10-08 17:50:04.216242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.216272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.216717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.216745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.217106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.217135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.217500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.217530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.217871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.217899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.218248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.218278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.218558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.218587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.218956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.218995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.219345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.219374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.219638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.219671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 
00:34:12.496 [2024-10-08 17:50:04.220010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.220041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.220444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.220473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.220838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.220868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.221322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.221352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.221563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.221593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.221851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.221881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.222096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.222126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.222360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.222389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.222623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.222652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.223018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.223056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 
00:34:12.496 [2024-10-08 17:50:04.223439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.223470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.223839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.223868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.224263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.224294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.224675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.224705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.225053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.225083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.225515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.225544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.225920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.225949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.226338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.226369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.226594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.226625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 00:34:12.496 [2024-10-08 17:50:04.227004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.496 [2024-10-08 17:50:04.227035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.496 qpair failed and we were unable to recover it. 
00:34:12.496 [2024-10-08 17:50:04.227278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.227307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.227551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.227580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.227782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.227810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.228156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.228186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.228403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.228432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.228833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.228862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.229224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.229255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.229595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.229624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.230005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.230036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.230387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.230418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 
00:34:12.497 [2024-10-08 17:50:04.230640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.230669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.230905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.230937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.231331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.231363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.231726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.231755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.232132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.232164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.232510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.232539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.232760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.232789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.233035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.233065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.233303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.233332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.233708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.233737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 
00:34:12.497 [2024-10-08 17:50:04.234106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.234138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.234509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.234537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.234892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.234921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.235290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.235321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.235598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.235626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.235981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.236012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.236379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.236409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.236634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.236663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.236884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.236912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.237315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.237350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 
00:34:12.497 [2024-10-08 17:50:04.237706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.237737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.237992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.238023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.238260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.238288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.497 [2024-10-08 17:50:04.238717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.497 [2024-10-08 17:50:04.238746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.497 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.239088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.239119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.239336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.239364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.239598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.239626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.239845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.239884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.240224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.240254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.240616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.240645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 
00:34:12.498 [2024-10-08 17:50:04.241019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.241049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.241412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.241439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.241834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.241862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.242231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.242264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.242625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.242654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.243027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.243058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.243445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.243473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.243718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.243750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.244148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.244178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.244548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.244579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 
00:34:12.498 [2024-10-08 17:50:04.244825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.244854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.245089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.245118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.245248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.245276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.245603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.245632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.246003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.246033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.246398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.246426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.246808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.246838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.247171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.247201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.247436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.247464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.247713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.247741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 
00:34:12.498 [2024-10-08 17:50:04.248100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.248131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.248514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.248542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.248908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.248939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.249289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.249320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.249485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.249513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.249727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.249755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.250099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.250130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.250537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.250567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.250903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.250932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.251356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.251395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 
00:34:12.498 [2024-10-08 17:50:04.251801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.251830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.252228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.252258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.252629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.252658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.498 [2024-10-08 17:50:04.253044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.498 [2024-10-08 17:50:04.253074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.498 qpair failed and we were unable to recover it. 00:34:12.499 [2024-10-08 17:50:04.253457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.499 [2024-10-08 17:50:04.253486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.499 qpair failed and we were unable to recover it. 00:34:12.499 [2024-10-08 17:50:04.253694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.499 [2024-10-08 17:50:04.253723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.499 qpair failed and we were unable to recover it. 00:34:12.499 [2024-10-08 17:50:04.253987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.499 [2024-10-08 17:50:04.254017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.499 qpair failed and we were unable to recover it. 00:34:12.499 [2024-10-08 17:50:04.254377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.499 [2024-10-08 17:50:04.254406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.499 qpair failed and we were unable to recover it. 00:34:12.499 [2024-10-08 17:50:04.254604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.499 [2024-10-08 17:50:04.254633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.499 qpair failed and we were unable to recover it. 00:34:12.499 [2024-10-08 17:50:04.255018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.499 [2024-10-08 17:50:04.255048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.499 qpair failed and we were unable to recover it. 
00:34:12.500 [2024-10-08 17:50:04.273687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.500 [2024-10-08 17:50:04.273715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420
00:34:12.500 qpair failed and we were unable to recover it.
00:34:12.500 [2024-10-08 17:50:04.274133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.500 [2024-10-08 17:50:04.274231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed0550 with addr=10.0.0.2, port=4420
00:34:12.500 qpair failed and we were unable to recover it.
00:34:12.501 [2024-10-08 17:50:04.280077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.501 [2024-10-08 17:50:04.280108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed0550 with addr=10.0.0.2, port=4420
00:34:12.501 qpair failed and we were unable to recover it.
00:34:12.501 [2024-10-08 17:50:04.280626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.501 [2024-10-08 17:50:04.280730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.501 qpair failed and we were unable to recover it.
00:34:12.501 [2024-10-08 17:50:04.289855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.501 [2024-10-08 17:50:04.289887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.501 qpair failed and we were unable to recover it.
00:34:12.501 [2024-10-08 17:50:04.290474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.501 [2024-10-08 17:50:04.290578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420
00:34:12.501 qpair failed and we were unable to recover it.
00:34:12.502 [2024-10-08 17:50:04.297614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.502 [2024-10-08 17:50:04.297643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420
00:34:12.502 qpair failed and we were unable to recover it.
00:34:12.502 [2024-10-08 17:50:04.298251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.502 [2024-10-08 17:50:04.298355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.502 qpair failed and we were unable to recover it.
00:34:12.504 [2024-10-08 17:50:04.322884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.322914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.323133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.323166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.323385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.323415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.323786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.323813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.324152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.324182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.324388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.324417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.324624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.324652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.324761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.324792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.325154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.325185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.325550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.325581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 
00:34:12.504 [2024-10-08 17:50:04.325821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.325850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.326207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.326237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.326609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.326638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.327020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.327050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.327459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.327487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.327841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.327871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.328254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.328284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.328652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.328680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.328898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.328927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.329302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.329331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 
00:34:12.504 [2024-10-08 17:50:04.329567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.329596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.330022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.330055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.330282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.330311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.330707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.330737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.331099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.331129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.331500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.331528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.331904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.331941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.332351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.332383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.332730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.332760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.332994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.333024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 
00:34:12.504 [2024-10-08 17:50:04.333421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.333451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.333795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.333824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.334191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.334220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.334467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.334499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.334897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.334926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.335185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.335214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.335573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.335602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.504 qpair failed and we were unable to recover it. 00:34:12.504 [2024-10-08 17:50:04.335842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.504 [2024-10-08 17:50:04.335870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.336115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.336146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.336502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.336530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 
00:34:12.505 [2024-10-08 17:50:04.336776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.336810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.337156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.337186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.337449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.337477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.337829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.337858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.338282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.338313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.338540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.338573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.338919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.338950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.339310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.339340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.339710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.339738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.339950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.339999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 
00:34:12.505 [2024-10-08 17:50:04.340240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.340269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.340688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.340717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.340948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.340991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.341386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.341416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.341767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.341796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.342141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.342170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.342536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.342565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.342795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.342824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.343156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.343185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.343629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.343658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 
00:34:12.505 [2024-10-08 17:50:04.343891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.343919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.344287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.344316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.344671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.344700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.345071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.345102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.345468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.345499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.345867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.345896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.346271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.346310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.346513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.346541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.346919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.346949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 00:34:12.505 [2024-10-08 17:50:04.347160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.505 [2024-10-08 17:50:04.347191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.505 qpair failed and we were unable to recover it. 
00:34:12.505 [2024-10-08 17:50:04.347882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.505 [2024-10-08 17:50:04.348005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:12.505 qpair failed and we were unable to recover it.
00:34:12.508 [... the same error triple continues through 17:50:04.385: ~12 occurrences for tqpair=0x7faa48000b90, one for tqpair=0x7faa50000b90, and ~90 more for tqpair=0x7faa44000b90, again differing only in timestamps ...]
00:34:12.508 [2024-10-08 17:50:04.385827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.508 [2024-10-08 17:50:04.385857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.508 qpair failed and we were unable to recover it. 00:34:12.508 [2024-10-08 17:50:04.386072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.508 [2024-10-08 17:50:04.386105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.508 qpair failed and we were unable to recover it. 00:34:12.508 [2024-10-08 17:50:04.386339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.508 [2024-10-08 17:50:04.386367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.508 qpair failed and we were unable to recover it. 00:34:12.508 [2024-10-08 17:50:04.386779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.508 [2024-10-08 17:50:04.386808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.508 qpair failed and we were unable to recover it. 00:34:12.508 [2024-10-08 17:50:04.387058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.508 [2024-10-08 17:50:04.387088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.508 qpair failed and we were unable to recover it. 00:34:12.508 [2024-10-08 17:50:04.387309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.508 [2024-10-08 17:50:04.387338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.508 qpair failed and we were unable to recover it. 00:34:12.508 [2024-10-08 17:50:04.387545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.508 [2024-10-08 17:50:04.387573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.508 qpair failed and we were unable to recover it. 00:34:12.508 [2024-10-08 17:50:04.387824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.508 [2024-10-08 17:50:04.387853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.508 qpair failed and we were unable to recover it. 00:34:12.508 [2024-10-08 17:50:04.388101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.508 [2024-10-08 17:50:04.388132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.508 qpair failed and we were unable to recover it. 00:34:12.508 [2024-10-08 17:50:04.388474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.508 [2024-10-08 17:50:04.388504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.508 qpair failed and we were unable to recover it. 
00:34:12.508 [2024-10-08 17:50:04.388874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.508 [2024-10-08 17:50:04.388905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.508 qpair failed and we were unable to recover it. 00:34:12.508 [2024-10-08 17:50:04.389255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.508 [2024-10-08 17:50:04.389284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.508 qpair failed and we were unable to recover it. 00:34:12.508 [2024-10-08 17:50:04.389662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.389691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.390044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.390074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.390419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.390450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.390769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.390798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.391001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.391031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.391425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.391453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.391603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.391631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.392062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.392092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 
00:34:12.509 [2024-10-08 17:50:04.392507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.392536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.392778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.392807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.393012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.393041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.393420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.393449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.393697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.393730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.394090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.394120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.394495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.394525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.394950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.394990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.395325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.395353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.395600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.395629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 
00:34:12.509 [2024-10-08 17:50:04.395728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.395754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.396017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.396048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.396465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.396494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.396703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.396738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.397027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.397058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.397307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.397336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.397736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.397764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.398025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.398055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.398438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.398467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.398837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.398866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 
00:34:12.509 [2024-10-08 17:50:04.399225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.399254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.399622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.399652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.399915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.399945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.400078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.400112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.400337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.400367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.400749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.400778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.401155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.401184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.401557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.401586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.401949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.401985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.402199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.402228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 
00:34:12.509 [2024-10-08 17:50:04.402565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.402593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.402830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.509 [2024-10-08 17:50:04.402858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.509 qpair failed and we were unable to recover it. 00:34:12.509 [2024-10-08 17:50:04.403101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.403131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.403497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.403525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.403883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.403913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.404261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.404299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.404657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.404686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.404914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.404942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.405167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.405198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.405300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.405330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 
00:34:12.510 [2024-10-08 17:50:04.405708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.405737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.406100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.406131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.406358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.406386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.406736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.406766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.407109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.407139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.407520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.407549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.407660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.407690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.407946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.407990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.408346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.408375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.408752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.408781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 
00:34:12.510 [2024-10-08 17:50:04.409150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.409181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.409418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.409448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.409837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.409866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.410216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.410252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.410707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.410737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.411098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.411127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.411365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.411394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.411761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.411790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.412154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.412183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.412566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.412594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 
00:34:12.510 [2024-10-08 17:50:04.412808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.412838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.413292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.413323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.413424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.413450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.413581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.413608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.414003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.414034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.414401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.414430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.414768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.510 [2024-10-08 17:50:04.414796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.510 qpair failed and we were unable to recover it. 00:34:12.510 [2024-10-08 17:50:04.415177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.415207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.415566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.415595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.415966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.416003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 
00:34:12.511 [2024-10-08 17:50:04.416366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.416394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.416775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.416804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.417039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.417068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.417311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.417340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.417700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.417729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.418087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.418118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.418341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.418369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.418725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.418754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.419135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.419164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.419542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.419569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 
00:34:12.511 [2024-10-08 17:50:04.419928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.419957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.420314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.420343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.420569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.420596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.420964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.421013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.421391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.421419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.421804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.421833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.422082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.422115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.422508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.422537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.422921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.422951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.423320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.423351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 
00:34:12.511 [2024-10-08 17:50:04.423717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.423747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.423989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.424018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.424354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.424384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.424616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.424651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.424902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.424931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.425161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.425192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.425588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.425617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.426010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.426042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.426407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.426435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.426774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.426804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 
00:34:12.511 [2024-10-08 17:50:04.427158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.427189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.427443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.427470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.427843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.427873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.428250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.428280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.428657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.428687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.429061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.429091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.511 [2024-10-08 17:50:04.429202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.511 [2024-10-08 17:50:04.429233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.511 qpair failed and we were unable to recover it. 00:34:12.512 [2024-10-08 17:50:04.429662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.512 [2024-10-08 17:50:04.429768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.512 qpair failed and we were unable to recover it. 00:34:12.512 [2024-10-08 17:50:04.430232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.512 [2024-10-08 17:50:04.430337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.512 qpair failed and we were unable to recover it. 00:34:12.512 [2024-10-08 17:50:04.430784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.512 [2024-10-08 17:50:04.430822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.512 qpair failed and we were unable to recover it. 
00:34:12.512 [2024-10-08 17:50:04.431072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.512 [2024-10-08 17:50:04.431106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420
00:34:12.512 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / qpair failed sequence repeats for tqpair=0x7faa48000b90 from 17:50:04.431376 through 17:50:04.483629 ...]
00:34:12.792 [2024-10-08 17:50:04.484182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.793 [2024-10-08 17:50:04.484287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420
00:34:12.793 qpair failed and we were unable to recover it.
[... sequence repeats for tqpair=0x7faa50000b90 from 17:50:04.484611 through 17:50:04.491842 ...]
00:34:12.793 [2024-10-08 17:50:04.492401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.793 [2024-10-08 17:50:04.492511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed0550 with addr=10.0.0.2, port=4420
00:34:12.793 qpair failed and we were unable to recover it.
[... sequence repeats for tqpair=0xed0550 from 17:50:04.492961 through 17:50:04.500622 ...]
00:34:12.794 [2024-10-08 17:50:04.501159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.794 [2024-10-08 17:50:04.501252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.794 qpair failed and we were unable to recover it.
00:34:12.794 [2024-10-08 17:50:04.501677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.501710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.502111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.502146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.502526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.502557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.502901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.502934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.503326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.503360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.503716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.503745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.504284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.504386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.504747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.504785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.505034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.505067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.505506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.505538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 
00:34:12.794 [2024-10-08 17:50:04.505767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.505816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.506081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.506118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.506411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.506441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.506679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.506708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.507086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.507116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.507212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.507239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.507592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.507622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.507871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.507901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.508245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.508275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.508454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.508483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 
00:34:12.794 [2024-10-08 17:50:04.508716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.508744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.509100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.509129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.509468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.509498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.509857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.509886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.510104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.510135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.510523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.510552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.510939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.510969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.511387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.511419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.511797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.511829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.512098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.512128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 
00:34:12.794 [2024-10-08 17:50:04.512350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.512379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.512778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.512809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.513156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.513186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.513565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.513595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.513961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.514017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.794 qpair failed and we were unable to recover it. 00:34:12.794 [2024-10-08 17:50:04.514436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.794 [2024-10-08 17:50:04.514466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.514817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.514847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.515230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.515263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.515699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.515728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.515940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.515969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 
00:34:12.795 [2024-10-08 17:50:04.516227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.516257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.516460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.516489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.516786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.516820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.517169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.517200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.517357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.517385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.517756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.517785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.518136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.518166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.518549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.518578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.518957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.518998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.519425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.519455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 
00:34:12.795 [2024-10-08 17:50:04.519682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.519710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.520090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.520120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.520509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.520538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.520926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.520957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.521317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.521347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.521705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.521735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.521951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.521991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.522199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.522227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.522321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.522349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.522451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.522479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 
00:34:12.795 [2024-10-08 17:50:04.522690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.522718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.522838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.522868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.522999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.523028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.523257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.523287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.523703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.523734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.524113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.524143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.524403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.524433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.524692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.524720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.525089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.525119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.525463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.525493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 
00:34:12.795 [2024-10-08 17:50:04.525719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.525748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.525849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.525878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.526176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.526208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.526437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.526467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.526839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.795 [2024-10-08 17:50:04.526869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.795 qpair failed and we were unable to recover it. 00:34:12.795 [2024-10-08 17:50:04.527098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.527129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.527525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.527555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.527924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.527961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.528335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.528367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.528584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.528615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 
00:34:12.796 [2024-10-08 17:50:04.528991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.529021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.529408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.529438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.529539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.529569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.530158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.530265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.530574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.530610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.530998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.531032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.531399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.531429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.531808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.531837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.532241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.532347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.532809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.532847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 
00:34:12.796 [2024-10-08 17:50:04.533247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.533281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.533543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.533575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.533938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.533969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.534229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.534264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.534670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.534700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.535058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.535088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.535454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.535483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.535710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.535740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.536087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.536116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.536504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.536534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 
00:34:12.796 [2024-10-08 17:50:04.536787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.536817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.536910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.536938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.537206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.537237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.537631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.537660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.537897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.537926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.538044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.538073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.538467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.538497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.538741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.796 [2024-10-08 17:50:04.538771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.796 qpair failed and we were unable to recover it. 00:34:12.796 [2024-10-08 17:50:04.539110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.539140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.539382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.539411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 
00:34:12.797 [2024-10-08 17:50:04.539655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.539685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.540058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.540090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.540456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.540486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.540731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.540760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.541104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.541134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.541592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.541622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.541993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.542022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.542208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.542247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.542639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.542669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.542921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.542950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 
00:34:12.797 [2024-10-08 17:50:04.543326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.543356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.543664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.543692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.543953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.543994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.544352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.544381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.544728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.544758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.544981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.545010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.545223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.545254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.545467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.545497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.545851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.545881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.546116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.546148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 
00:34:12.797 [2024-10-08 17:50:04.546603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.546633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.546962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.547002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.547133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.547161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.547520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.547550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.547705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.547734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.548103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.548134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.548510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.548540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.548909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.548941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.549291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.549323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.549681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.549712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 
00:34:12.797 [2024-10-08 17:50:04.550074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.550104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.550374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.550404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.550765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.550795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.551175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.551206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.551568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.551600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.551971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.552012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.552239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.797 [2024-10-08 17:50:04.552269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.797 qpair failed and we were unable to recover it. 00:34:12.797 [2024-10-08 17:50:04.552499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.798 [2024-10-08 17:50:04.552528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.798 qpair failed and we were unable to recover it. 00:34:12.798 [2024-10-08 17:50:04.552721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.798 [2024-10-08 17:50:04.552749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.798 qpair failed and we were unable to recover it. 00:34:12.798 [2024-10-08 17:50:04.552958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.798 [2024-10-08 17:50:04.552996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa50000b90 with addr=10.0.0.2, port=4420 00:34:12.798 qpair failed and we were unable to recover it. 
00:34:12.803 [2024-10-08 17:50:04.623630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.623659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.624029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.624060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.624444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.624474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.624838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.624867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.625103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.625135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.625415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.625447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.625797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.625827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.626272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.626302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.626750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.626780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.627149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.627179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 
00:34:12.803 [2024-10-08 17:50:04.627424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.627456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.627838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.627867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.628249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.628279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.628515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.628544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.628749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.628778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.629007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.629039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.629416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.629445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.629581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.629610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.629853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.629882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.630211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.630241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 
00:34:12.803 [2024-10-08 17:50:04.630466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.630497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.630902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.630932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.631172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.631205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.631448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.631477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.631809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.631838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.632203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.632234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.632488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.632519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.632725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.632754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.632969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.633006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 00:34:12.803 [2024-10-08 17:50:04.633399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.803 [2024-10-08 17:50:04.633429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa48000b90 with addr=10.0.0.2, port=4420 00:34:12.803 qpair failed and we were unable to recover it. 
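On Linux, errno 111 is ECONNREFUSED: the host reaches 10.0.0.2 but nothing is accepting on port 4420, which is exactly the condition a target-disconnect test provokes, so the host-side retry storm above is expected noise rather than a crash. A minimal shell probe that reproduces the same failure mode (illustrative only; the address and port are copied from the log, and /dev/tcp is bash's built-in TCP redirection, not a real path):

  # Attempt the same TCP connect the failing posix_sock_create() makes;
  # with no listener on 10.0.0.2:4420 this exits non-zero, the shell-level
  # analogue of the "connect() failed, errno = 111" records above.
  if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
      echo "10.0.0.2:4420 refused the connection (ECONNREFUSED) or timed out"
  fi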
00:34:12.803 [... two more errno = 111 triplets for tqpair=0x7faa48000b90 (17:50:04.633784-17:50:04.634183) ...]
00:34:12.803 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:34:12.803 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:34:12.804 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:34:12.804 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:34:12.804 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:12.804 [... six more errno = 111 triplets for tqpair=0x7faa48000b90 (17:50:04.634641-17:50:04.636356), interleaved with and following the trace lines above ...]
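The two autotest_common.sh trace records show a polling loop's exit check passing: (( i == 0 )) evaluates false, so the retry counter never ran out and the helper returns 0, after which the start_nvmf_tgt timing region is closed. A generic sketch of that countdown/poll shape; every name and limit below (wait_for_target, check_ready, 30 tries) is hypothetical, not SPDK's actual code:

  # Hypothetical sketch of the countdown/poll idiom suggested by the trace.
  wait_for_target() {
      local i
      for (( i = 30; i > 0; i-- )); do
          check_ready && break   # hypothetical readiness probe
          sleep 1
      done
      (( i == 0 )) && return 1   # counter exhausted: give up
      return 0                   # the success path seen in the trace
  }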
00:34:12.804 [... ten more errno = 111 triplets for tqpair=0x7faa48000b90 (17:50:04.636613-17:50:04.639619), each ending "qpair failed and we were unable to recover it." ...]
00:34:12.804 [2024-10-08 17:50:04.640245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.804 [2024-10-08 17:50:04.640348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420
00:34:12.804 qpair failed and we were unable to recover it.
00:34:12.805 [... the retry storm moves to tqpair=0x7faa44000b90: the same errno = 111 triplet repeats roughly 100 more times between 17:50:04.640797 and 17:50:04.675051, all against addr=10.0.0.2, port=4420 ...]
00:34:12.807 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:12.807 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:12.807 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:12.807 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
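[editorial aside, not part of the captured output] The trap line registers cleanup (process_shm, then nvmftestfini) to run when the test exits, and rpc_cmd bdev_malloc_create 64 512 -b Malloc0 asks the running target to create a 64 MB RAM-backed bdev with 512-byte blocks. A minimal standalone sketch of the same call, assuming an SPDK app is listening on the default RPC socket and the working directory is the SPDK repo root (rpc_cmd in these tests wraps scripts/rpc.py):
    # sketch only, under the assumptions above
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # on success this prints the new bdev name: the lone "Malloc0" seen further down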
[... connect() retries against 10.0.0.2:4420 keep failing with errno = 111 for tqpair=0x7faa44000b90; duplicate entries elided ...]
00:34:12.809 Malloc0
00:34:12.809 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:12.809 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:12.809 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:12.809 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() failures (errno = 111) now also appear for a second qpair, tqpair=0x7faa50000b90, alongside the ongoing tqpair=0x7faa44000b90 retries; duplicate entries elided ...]
00:34:12.810 [2024-10-08 17:50:04.709860] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
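[editorial aside, not part of the captured output] rpc_cmd nvmf_create_transport -t tcp -o initializes the NVMe-oF TCP transport inside the target; the *** TCP Transport Init *** notice just above is the target-side confirmation. The same call stand-alone, with the flags copied verbatim from the trace (scripts/rpc.py nvmf_create_transport -h documents them):
    # sketch only: running target and default RPC socket assumed
    ./scripts/rpc.py nvmf_create_transport -t tcp -o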
[... tqpair=0x7faa44000b90 connect() retries continue to fail with errno = 111; duplicate entries elided ...]
00:34:12.810 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:12.811 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:12.811 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:12.811 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
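[editorial aside, not part of the captured output] This step creates the subsystem the host side will later connect to: -a allows any host NQN to connect and -s sets the serial number. Stand-alone sketch under the same assumptions as the earlier ones:
    # sketch only: create the NVMe-oF subsystem cnode1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001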
[... connect() failed (errno = 111) / qpair-recovery failures for tqpair=0x7faa44000b90 continue throughout; duplicate entries elided ...]
00:34:12.812 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:12.812 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:12.812 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:12.812 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:12.812 [2024-10-08 17:50:04.735223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.735254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 00:34:12.812 [2024-10-08 17:50:04.735644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.735674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 00:34:12.812 [2024-10-08 17:50:04.735896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.735924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 00:34:12.812 [2024-10-08 17:50:04.736182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.736212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 00:34:12.812 [2024-10-08 17:50:04.736587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.736616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 00:34:12.812 [2024-10-08 17:50:04.736839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.736868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 00:34:12.812 [2024-10-08 17:50:04.737215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.737246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 00:34:12.812 [2024-10-08 17:50:04.737611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.737640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 00:34:12.812 [2024-10-08 17:50:04.738011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.738040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 00:34:12.812 [2024-10-08 17:50:04.738295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.738324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 
00:34:12.812 [2024-10-08 17:50:04.738577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.738606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 00:34:12.812 [2024-10-08 17:50:04.738958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.738996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 00:34:12.812 [2024-10-08 17:50:04.739410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.739439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 00:34:12.812 [2024-10-08 17:50:04.739663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.739691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 00:34:12.812 [2024-10-08 17:50:04.740058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.740095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 00:34:12.812 [2024-10-08 17:50:04.740361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.740390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 00:34:12.812 [2024-10-08 17:50:04.740663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.740692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 00:34:12.812 [2024-10-08 17:50:04.741047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.741076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 00:34:12.812 [2024-10-08 17:50:04.741322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.741351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 00:34:12.812 [2024-10-08 17:50:04.741756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.741785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 
00:34:12.812 [2024-10-08 17:50:04.741889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.741918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 00:34:12.812 [2024-10-08 17:50:04.742266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.742296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 00:34:12.812 [2024-10-08 17:50:04.742519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.812 [2024-10-08 17:50:04.742546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.812 qpair failed and we were unable to recover it. 00:34:12.813 [2024-10-08 17:50:04.742810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.813 [2024-10-08 17:50:04.742839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.813 qpair failed and we were unable to recover it. 00:34:12.813 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.813 [2024-10-08 17:50:04.743293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.813 [2024-10-08 17:50:04.743322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.813 qpair failed and we were unable to recover it. 00:34:12.813 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:12.813 [2024-10-08 17:50:04.743694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.813 [2024-10-08 17:50:04.743722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.813 qpair failed and we were unable to recover it. 00:34:12.813 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.813 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:12.813 [2024-10-08 17:50:04.744109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.813 [2024-10-08 17:50:04.744139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.813 qpair failed and we were unable to recover it. 00:34:12.813 [2024-10-08 17:50:04.744510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.813 [2024-10-08 17:50:04.744539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.813 qpair failed and we were unable to recover it. 
00:34:12.813 [2024-10-08 17:50:04.744775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.813 [2024-10-08 17:50:04.744802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.813 qpair failed and we were unable to recover it. 00:34:12.813 [2024-10-08 17:50:04.744915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.813 [2024-10-08 17:50:04.744944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.813 qpair failed and we were unable to recover it. 00:34:12.813 [2024-10-08 17:50:04.745230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.813 [2024-10-08 17:50:04.745260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.813 qpair failed and we were unable to recover it. 00:34:12.813 [2024-10-08 17:50:04.745608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.813 [2024-10-08 17:50:04.745637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.813 qpair failed and we were unable to recover it. 00:34:12.813 [2024-10-08 17:50:04.745900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.813 [2024-10-08 17:50:04.745929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.813 qpair failed and we were unable to recover it. 00:34:12.813 [2024-10-08 17:50:04.746180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.813 [2024-10-08 17:50:04.746214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.813 qpair failed and we were unable to recover it. 00:34:12.813 [2024-10-08 17:50:04.746594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.813 [2024-10-08 17:50:04.746623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.813 qpair failed and we were unable to recover it. 00:34:12.813 [2024-10-08 17:50:04.746992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.813 [2024-10-08 17:50:04.747022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.813 qpair failed and we were unable to recover it. 00:34:12.813 [2024-10-08 17:50:04.747408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.813 [2024-10-08 17:50:04.747438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.813 qpair failed and we were unable to recover it. 00:34:12.813 [2024-10-08 17:50:04.747807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.813 [2024-10-08 17:50:04.747836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.813 qpair failed and we were unable to recover it. 
00:34:12.813 [2024-10-08 17:50:04.748210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.813 [2024-10-08 17:50:04.748241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.813 qpair failed and we were unable to recover it. 00:34:12.813 [2024-10-08 17:50:04.748613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.813 [2024-10-08 17:50:04.748642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.813 qpair failed and we were unable to recover it. 00:34:12.813 [2024-10-08 17:50:04.749021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.813 [2024-10-08 17:50:04.749050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.813 qpair failed and we were unable to recover it. 00:34:12.813 [2024-10-08 17:50:04.749421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.813 [2024-10-08 17:50:04.749449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.813 qpair failed and we were unable to recover it. 00:34:12.813 [2024-10-08 17:50:04.749810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.813 [2024-10-08 17:50:04.749839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa44000b90 with addr=10.0.0.2, port=4420 00:34:12.813 qpair failed and we were unable to recover it. 00:34:12.813 [2024-10-08 17:50:04.750632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:12.813 [2024-10-08 17:50:04.751666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.813 [2024-10-08 17:50:04.751825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.813 [2024-10-08 17:50:04.751878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.813 [2024-10-08 17:50:04.751901] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.813 [2024-10-08 17:50:04.751922] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:12.813 [2024-10-08 17:50:04.752006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:12.813 qpair failed and we were unable to recover it. 
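The record just above is the first one with full failure detail rather than a bare socket error: the target is now listening (the nvmf_tcp_listen NOTICE), the TCP connect succeeds, but the NVMe-oF Fabrics CONNECT command itself is rejected. On the target side ctrlr.c reports "Unknown controller ID 0x1"; on the host side the completion carries sct 1, sc 130. A quick decode of that status (interpretation per the NVMe-oF spec, not stated in the log itself: status code type 1 is "command specific", and for the Fabrics CONNECT command status 0x82 is "Connect Invalid Parameters", consistent with the target rejecting an I/O queue pair for an unknown controller ID):

  # sc is logged in decimal; print it in hex to match the spec tables.
  printf 'sct 0x%x, sc 0x%02x\n' 1 130   # -> sct 0x1, sc 0x82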
00:34:12.813 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.813 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:12.813 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.813 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:12.813 [2024-10-08 17:50:04.761356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.813 [2024-10-08 17:50:04.761459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.813 [2024-10-08 17:50:04.761497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.813 [2024-10-08 17:50:04.761516] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.813 [2024-10-08 17:50:04.761533] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:12.813 [2024-10-08 17:50:04.761572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:12.813 qpair failed and we were unable to recover it. 00:34:12.813 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.813 17:50:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 563207 00:34:13.075 [2024-10-08 17:50:04.771374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.075 [2024-10-08 17:50:04.771464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.075 [2024-10-08 17:50:04.771493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.075 [2024-10-08 17:50:04.771507] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.075 [2024-10-08 17:50:04.771519] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.075 [2024-10-08 17:50:04.771549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.075 qpair failed and we were unable to recover it. 
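For readability, the shell trace interleaved through the records above amounts to this target-setup sequence; the three commands are copied verbatim from the xtrace lines (rpc_cmd is the harness's wrapper around SPDK's scripts/rpc.py, as far as the trace shows):

  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420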
00:34:13.075 [2024-10-08 17:50:04.781325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.075 [2024-10-08 17:50:04.781437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.075 [2024-10-08 17:50:04.781459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.075 [2024-10-08 17:50:04.781469] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.075 [2024-10-08 17:50:04.781478] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.075 [2024-10-08 17:50:04.781500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.075 qpair failed and we were unable to recover it. 00:34:13.075 [2024-10-08 17:50:04.791343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.075 [2024-10-08 17:50:04.791421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.076 [2024-10-08 17:50:04.791437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.076 [2024-10-08 17:50:04.791444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.076 [2024-10-08 17:50:04.791451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.076 [2024-10-08 17:50:04.791466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.076 qpair failed and we were unable to recover it. 00:34:13.076 [2024-10-08 17:50:04.801336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.076 [2024-10-08 17:50:04.801400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.076 [2024-10-08 17:50:04.801417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.076 [2024-10-08 17:50:04.801425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.076 [2024-10-08 17:50:04.801431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.076 [2024-10-08 17:50:04.801447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.076 qpair failed and we were unable to recover it. 
00:34:13.076 [2024-10-08 17:50:04.811345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.076 [2024-10-08 17:50:04.811416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.076 [2024-10-08 17:50:04.811433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.076 [2024-10-08 17:50:04.811446] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.076 [2024-10-08 17:50:04.811453] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.076 [2024-10-08 17:50:04.811468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.076 qpair failed and we were unable to recover it. 00:34:13.076 [2024-10-08 17:50:04.821394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.076 [2024-10-08 17:50:04.821468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.076 [2024-10-08 17:50:04.821486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.076 [2024-10-08 17:50:04.821493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.076 [2024-10-08 17:50:04.821500] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.076 [2024-10-08 17:50:04.821517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.076 qpair failed and we were unable to recover it. 00:34:13.076 [2024-10-08 17:50:04.831490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.076 [2024-10-08 17:50:04.831568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.076 [2024-10-08 17:50:04.831585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.076 [2024-10-08 17:50:04.831593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.076 [2024-10-08 17:50:04.831599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.076 [2024-10-08 17:50:04.831616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.076 qpair failed and we were unable to recover it. 
00:34:13.076 [2024-10-08 17:50:04.841476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.076 [2024-10-08 17:50:04.841543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.076 [2024-10-08 17:50:04.841560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.076 [2024-10-08 17:50:04.841568] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.076 [2024-10-08 17:50:04.841575] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.076 [2024-10-08 17:50:04.841592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.076 qpair failed and we were unable to recover it. 00:34:13.076 [2024-10-08 17:50:04.851492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.076 [2024-10-08 17:50:04.851561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.076 [2024-10-08 17:50:04.851579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.076 [2024-10-08 17:50:04.851587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.076 [2024-10-08 17:50:04.851593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.076 [2024-10-08 17:50:04.851609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.076 qpair failed and we were unable to recover it. 00:34:13.076 [2024-10-08 17:50:04.861502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.076 [2024-10-08 17:50:04.861573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.076 [2024-10-08 17:50:04.861589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.076 [2024-10-08 17:50:04.861597] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.076 [2024-10-08 17:50:04.861603] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.076 [2024-10-08 17:50:04.861620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.076 qpair failed and we were unable to recover it. 
00:34:13.076 [2024-10-08 17:50:04.871580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.076 [2024-10-08 17:50:04.871664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.076 [2024-10-08 17:50:04.871680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.076 [2024-10-08 17:50:04.871687] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.076 [2024-10-08 17:50:04.871694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.076 [2024-10-08 17:50:04.871710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.076 qpair failed and we were unable to recover it. 00:34:13.076 [2024-10-08 17:50:04.881435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.076 [2024-10-08 17:50:04.881509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.076 [2024-10-08 17:50:04.881526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.076 [2024-10-08 17:50:04.881533] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.076 [2024-10-08 17:50:04.881539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.076 [2024-10-08 17:50:04.881555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.076 qpair failed and we were unable to recover it. 00:34:13.076 [2024-10-08 17:50:04.891455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.076 [2024-10-08 17:50:04.891516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.076 [2024-10-08 17:50:04.891533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.076 [2024-10-08 17:50:04.891540] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.076 [2024-10-08 17:50:04.891547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.076 [2024-10-08 17:50:04.891562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.076 qpair failed and we were unable to recover it. 
00:34:13.076 [2024-10-08 17:50:04.901492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.076 [2024-10-08 17:50:04.901557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.076 [2024-10-08 17:50:04.901574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.076 [2024-10-08 17:50:04.901589] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.076 [2024-10-08 17:50:04.901596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.076 [2024-10-08 17:50:04.901611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.076 qpair failed and we were unable to recover it. 00:34:13.076 [2024-10-08 17:50:04.911557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.076 [2024-10-08 17:50:04.911632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.076 [2024-10-08 17:50:04.911648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.076 [2024-10-08 17:50:04.911655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.076 [2024-10-08 17:50:04.911661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.076 [2024-10-08 17:50:04.911677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.076 qpair failed and we were unable to recover it. 00:34:13.076 [2024-10-08 17:50:04.921585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.076 [2024-10-08 17:50:04.921647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.076 [2024-10-08 17:50:04.921664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.076 [2024-10-08 17:50:04.921671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.076 [2024-10-08 17:50:04.921677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.076 [2024-10-08 17:50:04.921693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.076 qpair failed and we were unable to recover it. 
00:34:13.077 [2024-10-08 17:50:04.931606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.077 [2024-10-08 17:50:04.931687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.077 [2024-10-08 17:50:04.931707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.077 [2024-10-08 17:50:04.931716] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.077 [2024-10-08 17:50:04.931725] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.077 [2024-10-08 17:50:04.931743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.077 qpair failed and we were unable to recover it. 00:34:13.077 [2024-10-08 17:50:04.941774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.077 [2024-10-08 17:50:04.941842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.077 [2024-10-08 17:50:04.941861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.077 [2024-10-08 17:50:04.941868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.077 [2024-10-08 17:50:04.941874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.077 [2024-10-08 17:50:04.941890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.077 qpair failed and we were unable to recover it. 00:34:13.077 [2024-10-08 17:50:04.951829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.077 [2024-10-08 17:50:04.951899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.077 [2024-10-08 17:50:04.951917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.077 [2024-10-08 17:50:04.951924] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.077 [2024-10-08 17:50:04.951930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.077 [2024-10-08 17:50:04.951946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.077 qpair failed and we were unable to recover it. 
00:34:13.077 [2024-10-08 17:50:04.961824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.077 [2024-10-08 17:50:04.961885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.077 [2024-10-08 17:50:04.961901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.077 [2024-10-08 17:50:04.961908] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.077 [2024-10-08 17:50:04.961915] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.077 [2024-10-08 17:50:04.961930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.077 qpair failed and we were unable to recover it. 00:34:13.077 [2024-10-08 17:50:04.971834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.077 [2024-10-08 17:50:04.971929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.077 [2024-10-08 17:50:04.971948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.077 [2024-10-08 17:50:04.971956] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.077 [2024-10-08 17:50:04.971965] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.077 [2024-10-08 17:50:04.971989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.077 qpair failed and we were unable to recover it. 00:34:13.077 [2024-10-08 17:50:04.981901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.077 [2024-10-08 17:50:04.981970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.077 [2024-10-08 17:50:04.981994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.077 [2024-10-08 17:50:04.982001] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.077 [2024-10-08 17:50:04.982007] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.077 [2024-10-08 17:50:04.982024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.077 qpair failed and we were unable to recover it. 
00:34:13.077 [2024-10-08 17:50:04.991989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.077 [2024-10-08 17:50:04.992090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.077 [2024-10-08 17:50:04.992111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.077 [2024-10-08 17:50:04.992118] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.077 [2024-10-08 17:50:04.992125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.077 [2024-10-08 17:50:04.992140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.077 qpair failed and we were unable to recover it. 00:34:13.077 [2024-10-08 17:50:05.001913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.077 [2024-10-08 17:50:05.001969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.077 [2024-10-08 17:50:05.001990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.077 [2024-10-08 17:50:05.001998] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.077 [2024-10-08 17:50:05.002004] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.077 [2024-10-08 17:50:05.002019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.077 qpair failed and we were unable to recover it. 00:34:13.077 [2024-10-08 17:50:05.012169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.077 [2024-10-08 17:50:05.012243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.077 [2024-10-08 17:50:05.012259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.077 [2024-10-08 17:50:05.012267] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.077 [2024-10-08 17:50:05.012273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.077 [2024-10-08 17:50:05.012289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.077 qpair failed and we were unable to recover it. 
00:34:13.077 [2024-10-08 17:50:05.021982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.077 [2024-10-08 17:50:05.022054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.077 [2024-10-08 17:50:05.022071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.077 [2024-10-08 17:50:05.022078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.077 [2024-10-08 17:50:05.022085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.077 [2024-10-08 17:50:05.022100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.077 qpair failed and we were unable to recover it. 00:34:13.077 [2024-10-08 17:50:05.032120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.077 [2024-10-08 17:50:05.032221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.077 [2024-10-08 17:50:05.032237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.077 [2024-10-08 17:50:05.032244] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.077 [2024-10-08 17:50:05.032251] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.077 [2024-10-08 17:50:05.032271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.077 qpair failed and we were unable to recover it. 00:34:13.077 [2024-10-08 17:50:05.042100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.077 [2024-10-08 17:50:05.042169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.077 [2024-10-08 17:50:05.042185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.077 [2024-10-08 17:50:05.042193] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.077 [2024-10-08 17:50:05.042199] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.077 [2024-10-08 17:50:05.042214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.077 qpair failed and we were unable to recover it. 
00:34:13.077 [2024-10-08 17:50:05.051956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.077 [2024-10-08 17:50:05.052016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.077 [2024-10-08 17:50:05.052033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.077 [2024-10-08 17:50:05.052040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.077 [2024-10-08 17:50:05.052047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.077 [2024-10-08 17:50:05.052062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.077 qpair failed and we were unable to recover it. 00:34:13.077 [2024-10-08 17:50:05.062135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.077 [2024-10-08 17:50:05.062200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.077 [2024-10-08 17:50:05.062215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.077 [2024-10-08 17:50:05.062223] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.077 [2024-10-08 17:50:05.062229] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.077 [2024-10-08 17:50:05.062244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.077 qpair failed and we were unable to recover it. 00:34:13.340 [2024-10-08 17:50:05.072187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.340 [2024-10-08 17:50:05.072277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.340 [2024-10-08 17:50:05.072296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.340 [2024-10-08 17:50:05.072303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.340 [2024-10-08 17:50:05.072313] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.340 [2024-10-08 17:50:05.072329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.340 qpair failed and we were unable to recover it. 
00:34:13.340 [2024-10-08 17:50:05.082163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.340 [2024-10-08 17:50:05.082231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.340 [2024-10-08 17:50:05.082254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.340 [2024-10-08 17:50:05.082261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.340 [2024-10-08 17:50:05.082267] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.340 [2024-10-08 17:50:05.082283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.340 qpair failed and we were unable to recover it. 00:34:13.340 [2024-10-08 17:50:05.092192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.340 [2024-10-08 17:50:05.092306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.340 [2024-10-08 17:50:05.092323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.340 [2024-10-08 17:50:05.092331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.340 [2024-10-08 17:50:05.092338] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.340 [2024-10-08 17:50:05.092353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.340 qpair failed and we were unable to recover it. 00:34:13.340 [2024-10-08 17:50:05.102248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.340 [2024-10-08 17:50:05.102316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.340 [2024-10-08 17:50:05.102333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.340 [2024-10-08 17:50:05.102340] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.340 [2024-10-08 17:50:05.102346] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.340 [2024-10-08 17:50:05.102361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.340 qpair failed and we were unable to recover it. 
00:34:13.340 [2024-10-08 17:50:05.112340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.340 [2024-10-08 17:50:05.112404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.340 [2024-10-08 17:50:05.112419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.340 [2024-10-08 17:50:05.112427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.340 [2024-10-08 17:50:05.112433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.340 [2024-10-08 17:50:05.112448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.340 qpair failed and we were unable to recover it. 00:34:13.340 [2024-10-08 17:50:05.122271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.340 [2024-10-08 17:50:05.122329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.340 [2024-10-08 17:50:05.122346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.340 [2024-10-08 17:50:05.122353] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.340 [2024-10-08 17:50:05.122360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.340 [2024-10-08 17:50:05.122380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.340 qpair failed and we were unable to recover it. 00:34:13.340 [2024-10-08 17:50:05.132389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.340 [2024-10-08 17:50:05.132488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.340 [2024-10-08 17:50:05.132504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.340 [2024-10-08 17:50:05.132511] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.340 [2024-10-08 17:50:05.132518] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.340 [2024-10-08 17:50:05.132533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.340 qpair failed and we were unable to recover it. 
00:34:13.340 [2024-10-08 17:50:05.142397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.340 [2024-10-08 17:50:05.142473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.340 [2024-10-08 17:50:05.142489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.340 [2024-10-08 17:50:05.142496] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.340 [2024-10-08 17:50:05.142502] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.340 [2024-10-08 17:50:05.142517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.340 qpair failed and we were unable to recover it.
00:34:13.340 [2024-10-08 17:50:05.152343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.340 [2024-10-08 17:50:05.152438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.340 [2024-10-08 17:50:05.152456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.340 [2024-10-08 17:50:05.152464] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.340 [2024-10-08 17:50:05.152470] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.340 [2024-10-08 17:50:05.152491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.340 qpair failed and we were unable to recover it.
00:34:13.340 [2024-10-08 17:50:05.162408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.340 [2024-10-08 17:50:05.162482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.340 [2024-10-08 17:50:05.162499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.340 [2024-10-08 17:50:05.162506] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.340 [2024-10-08 17:50:05.162512] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.340 [2024-10-08 17:50:05.162528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.340 qpair failed and we were unable to recover it.
00:34:13.340 [2024-10-08 17:50:05.172408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.340 [2024-10-08 17:50:05.172472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.340 [2024-10-08 17:50:05.172493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.340 [2024-10-08 17:50:05.172501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.340 [2024-10-08 17:50:05.172507] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.340 [2024-10-08 17:50:05.172523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.340 qpair failed and we were unable to recover it.
00:34:13.340 [2024-10-08 17:50:05.182353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.340 [2024-10-08 17:50:05.182421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.340 [2024-10-08 17:50:05.182439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.340 [2024-10-08 17:50:05.182449] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.340 [2024-10-08 17:50:05.182455] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.340 [2024-10-08 17:50:05.182484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.340 qpair failed and we were unable to recover it.
00:34:13.340 [2024-10-08 17:50:05.192545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.340 [2024-10-08 17:50:05.192624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.340 [2024-10-08 17:50:05.192640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.340 [2024-10-08 17:50:05.192648] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.340 [2024-10-08 17:50:05.192654] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.340 [2024-10-08 17:50:05.192670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.340 qpair failed and we were unable to recover it.
00:34:13.340 [2024-10-08 17:50:05.202561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.340 [2024-10-08 17:50:05.202629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.341 [2024-10-08 17:50:05.202646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.341 [2024-10-08 17:50:05.202654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.341 [2024-10-08 17:50:05.202661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.341 [2024-10-08 17:50:05.202676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.341 qpair failed and we were unable to recover it.
00:34:13.341 [2024-10-08 17:50:05.212582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.341 [2024-10-08 17:50:05.212643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.341 [2024-10-08 17:50:05.212661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.341 [2024-10-08 17:50:05.212668] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.341 [2024-10-08 17:50:05.212680] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.341 [2024-10-08 17:50:05.212696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.341 qpair failed and we were unable to recover it.
00:34:13.341 [2024-10-08 17:50:05.222655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.341 [2024-10-08 17:50:05.222759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.341 [2024-10-08 17:50:05.222776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.341 [2024-10-08 17:50:05.222785] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.341 [2024-10-08 17:50:05.222791] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.341 [2024-10-08 17:50:05.222808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.341 qpair failed and we were unable to recover it.
00:34:13.341 [2024-10-08 17:50:05.232640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.341 [2024-10-08 17:50:05.232707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.341 [2024-10-08 17:50:05.232723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.341 [2024-10-08 17:50:05.232730] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.341 [2024-10-08 17:50:05.232736] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.341 [2024-10-08 17:50:05.232752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.341 qpair failed and we were unable to recover it.
00:34:13.341 [2024-10-08 17:50:05.242625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.341 [2024-10-08 17:50:05.242688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.341 [2024-10-08 17:50:05.242706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.341 [2024-10-08 17:50:05.242713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.341 [2024-10-08 17:50:05.242721] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.341 [2024-10-08 17:50:05.242737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.341 qpair failed and we were unable to recover it.
00:34:13.341 [2024-10-08 17:50:05.252675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.341 [2024-10-08 17:50:05.252741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.341 [2024-10-08 17:50:05.252759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.341 [2024-10-08 17:50:05.252766] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.341 [2024-10-08 17:50:05.252773] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.341 [2024-10-08 17:50:05.252788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.341 qpair failed and we were unable to recover it.
00:34:13.341 [2024-10-08 17:50:05.262730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.341 [2024-10-08 17:50:05.262803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.341 [2024-10-08 17:50:05.262819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.341 [2024-10-08 17:50:05.262826] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.341 [2024-10-08 17:50:05.262833] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.341 [2024-10-08 17:50:05.262848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.341 qpair failed and we were unable to recover it.
00:34:13.341 [2024-10-08 17:50:05.272804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.341 [2024-10-08 17:50:05.272866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.341 [2024-10-08 17:50:05.272883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.341 [2024-10-08 17:50:05.272890] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.341 [2024-10-08 17:50:05.272896] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.341 [2024-10-08 17:50:05.272911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.341 qpair failed and we were unable to recover it.
00:34:13.341 [2024-10-08 17:50:05.282772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.341 [2024-10-08 17:50:05.282879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.341 [2024-10-08 17:50:05.282895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.341 [2024-10-08 17:50:05.282903] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.341 [2024-10-08 17:50:05.282909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.341 [2024-10-08 17:50:05.282924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.341 qpair failed and we were unable to recover it.
00:34:13.341 [2024-10-08 17:50:05.292829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.341 [2024-10-08 17:50:05.292901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.341 [2024-10-08 17:50:05.292918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.341 [2024-10-08 17:50:05.292926] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.341 [2024-10-08 17:50:05.292932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.341 [2024-10-08 17:50:05.292948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.341 qpair failed and we were unable to recover it.
00:34:13.341 [2024-10-08 17:50:05.302830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.341 [2024-10-08 17:50:05.302895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.341 [2024-10-08 17:50:05.302912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.341 [2024-10-08 17:50:05.302919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.341 [2024-10-08 17:50:05.302930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.341 [2024-10-08 17:50:05.302946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.341 qpair failed and we were unable to recover it.
00:34:13.341 [2024-10-08 17:50:05.312913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.341 [2024-10-08 17:50:05.312990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.341 [2024-10-08 17:50:05.313007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.341 [2024-10-08 17:50:05.313014] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.341 [2024-10-08 17:50:05.313021] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.341 [2024-10-08 17:50:05.313037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.341 qpair failed and we were unable to recover it.
00:34:13.341 [2024-10-08 17:50:05.322923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.341 [2024-10-08 17:50:05.322996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.341 [2024-10-08 17:50:05.323013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.341 [2024-10-08 17:50:05.323020] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.341 [2024-10-08 17:50:05.323026] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.341 [2024-10-08 17:50:05.323041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.341 qpair failed and we were unable to recover it.
00:34:13.604 [2024-10-08 17:50:05.333010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.604 [2024-10-08 17:50:05.333072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.604 [2024-10-08 17:50:05.333089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.604 [2024-10-08 17:50:05.333096] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.604 [2024-10-08 17:50:05.333103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.604 [2024-10-08 17:50:05.333119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.604 qpair failed and we were unable to recover it.
00:34:13.604 [2024-10-08 17:50:05.343028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.604 [2024-10-08 17:50:05.343121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.604 [2024-10-08 17:50:05.343138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.604 [2024-10-08 17:50:05.343145] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.604 [2024-10-08 17:50:05.343151] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.604 [2024-10-08 17:50:05.343166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.604 qpair failed and we were unable to recover it.
00:34:13.604 [2024-10-08 17:50:05.353024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.604 [2024-10-08 17:50:05.353107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.604 [2024-10-08 17:50:05.353124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.604 [2024-10-08 17:50:05.353132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.604 [2024-10-08 17:50:05.353138] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.604 [2024-10-08 17:50:05.353154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.604 qpair failed and we were unable to recover it.
00:34:13.604 [2024-10-08 17:50:05.362944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.604 [2024-10-08 17:50:05.363013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.604 [2024-10-08 17:50:05.363030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.604 [2024-10-08 17:50:05.363038] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.604 [2024-10-08 17:50:05.363044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.604 [2024-10-08 17:50:05.363059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.604 qpair failed and we were unable to recover it.
00:34:13.604 [2024-10-08 17:50:05.372983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.604 [2024-10-08 17:50:05.373042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.604 [2024-10-08 17:50:05.373058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.604 [2024-10-08 17:50:05.373065] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.604 [2024-10-08 17:50:05.373071] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.604 [2024-10-08 17:50:05.373087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.604 qpair failed and we were unable to recover it.
00:34:13.604 [2024-10-08 17:50:05.383097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.604 [2024-10-08 17:50:05.383177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.604 [2024-10-08 17:50:05.383193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.604 [2024-10-08 17:50:05.383200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.604 [2024-10-08 17:50:05.383207] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.604 [2024-10-08 17:50:05.383222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.604 qpair failed and we were unable to recover it.
00:34:13.604 [2024-10-08 17:50:05.393168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.604 [2024-10-08 17:50:05.393235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.604 [2024-10-08 17:50:05.393251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.604 [2024-10-08 17:50:05.393263] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.604 [2024-10-08 17:50:05.393269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.604 [2024-10-08 17:50:05.393285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.604 qpair failed and we were unable to recover it.
00:34:13.604 [2024-10-08 17:50:05.403179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.604 [2024-10-08 17:50:05.403248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.604 [2024-10-08 17:50:05.403265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.604 [2024-10-08 17:50:05.403272] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.604 [2024-10-08 17:50:05.403278] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.604 [2024-10-08 17:50:05.403294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.604 qpair failed and we were unable to recover it.
00:34:13.604 [2024-10-08 17:50:05.413256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.604 [2024-10-08 17:50:05.413355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.604 [2024-10-08 17:50:05.413371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.604 [2024-10-08 17:50:05.413379] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.604 [2024-10-08 17:50:05.413385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.604 [2024-10-08 17:50:05.413400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.604 qpair failed and we were unable to recover it.
00:34:13.604 [2024-10-08 17:50:05.423250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.604 [2024-10-08 17:50:05.423312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.604 [2024-10-08 17:50:05.423328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.604 [2024-10-08 17:50:05.423335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.604 [2024-10-08 17:50:05.423341] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.604 [2024-10-08 17:50:05.423356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.604 qpair failed and we were unable to recover it.
00:34:13.604 [2024-10-08 17:50:05.433289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.604 [2024-10-08 17:50:05.433356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.604 [2024-10-08 17:50:05.433373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.604 [2024-10-08 17:50:05.433381] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.604 [2024-10-08 17:50:05.433388] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.604 [2024-10-08 17:50:05.433404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.604 qpair failed and we were unable to recover it.
00:34:13.604 [2024-10-08 17:50:05.443269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.604 [2024-10-08 17:50:05.443359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.604 [2024-10-08 17:50:05.443376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.604 [2024-10-08 17:50:05.443383] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.604 [2024-10-08 17:50:05.443389] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.604 [2024-10-08 17:50:05.443406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.604 qpair failed and we were unable to recover it.
00:34:13.604 [2024-10-08 17:50:05.453296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.604 [2024-10-08 17:50:05.453356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.604 [2024-10-08 17:50:05.453372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.604 [2024-10-08 17:50:05.453380] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.604 [2024-10-08 17:50:05.453386] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.604 [2024-10-08 17:50:05.453401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.604 qpair failed and we were unable to recover it.
00:34:13.604 [2024-10-08 17:50:05.463389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.604 [2024-10-08 17:50:05.463466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.604 [2024-10-08 17:50:05.463482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.604 [2024-10-08 17:50:05.463489] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.604 [2024-10-08 17:50:05.463495] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.605 [2024-10-08 17:50:05.463510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.605 qpair failed and we were unable to recover it.
00:34:13.605 [2024-10-08 17:50:05.473299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.605 [2024-10-08 17:50:05.473364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.605 [2024-10-08 17:50:05.473380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.605 [2024-10-08 17:50:05.473387] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.605 [2024-10-08 17:50:05.473393] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.605 [2024-10-08 17:50:05.473409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.605 qpair failed and we were unable to recover it.
00:34:13.605 [2024-10-08 17:50:05.483400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.605 [2024-10-08 17:50:05.483472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.605 [2024-10-08 17:50:05.483493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.605 [2024-10-08 17:50:05.483500] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.605 [2024-10-08 17:50:05.483506] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.605 [2024-10-08 17:50:05.483521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.605 qpair failed and we were unable to recover it.
00:34:13.605 [2024-10-08 17:50:05.493459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.605 [2024-10-08 17:50:05.493523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.605 [2024-10-08 17:50:05.493540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.605 [2024-10-08 17:50:05.493547] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.605 [2024-10-08 17:50:05.493553] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.605 [2024-10-08 17:50:05.493569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.605 qpair failed and we were unable to recover it.
00:34:13.605 [2024-10-08 17:50:05.503512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.605 [2024-10-08 17:50:05.503584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.605 [2024-10-08 17:50:05.503600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.605 [2024-10-08 17:50:05.503607] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.605 [2024-10-08 17:50:05.503614] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.605 [2024-10-08 17:50:05.503629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.605 qpair failed and we were unable to recover it.
00:34:13.605 [2024-10-08 17:50:05.513562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.605 [2024-10-08 17:50:05.513626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.605 [2024-10-08 17:50:05.513642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.605 [2024-10-08 17:50:05.513650] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.605 [2024-10-08 17:50:05.513656] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.605 [2024-10-08 17:50:05.513671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.605 qpair failed and we were unable to recover it.
00:34:13.605 [2024-10-08 17:50:05.523548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.605 [2024-10-08 17:50:05.523611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.605 [2024-10-08 17:50:05.523628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.605 [2024-10-08 17:50:05.523635] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.605 [2024-10-08 17:50:05.523641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.605 [2024-10-08 17:50:05.523657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.605 qpair failed and we were unable to recover it.
00:34:13.605 [2024-10-08 17:50:05.533573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.605 [2024-10-08 17:50:05.533637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.605 [2024-10-08 17:50:05.533653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.605 [2024-10-08 17:50:05.533661] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.605 [2024-10-08 17:50:05.533667] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.605 [2024-10-08 17:50:05.533682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.605 qpair failed and we were unable to recover it.
00:34:13.605 [2024-10-08 17:50:05.543612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.605 [2024-10-08 17:50:05.543685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.605 [2024-10-08 17:50:05.543700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.605 [2024-10-08 17:50:05.543708] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.605 [2024-10-08 17:50:05.543714] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.605 [2024-10-08 17:50:05.543729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.605 qpair failed and we were unable to recover it.
00:34:13.605 [2024-10-08 17:50:05.553662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.605 [2024-10-08 17:50:05.553770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.605 [2024-10-08 17:50:05.553786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.605 [2024-10-08 17:50:05.553793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.605 [2024-10-08 17:50:05.553800] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.605 [2024-10-08 17:50:05.553816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.605 qpair failed and we were unable to recover it.
00:34:13.605 [2024-10-08 17:50:05.563662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.605 [2024-10-08 17:50:05.563724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.605 [2024-10-08 17:50:05.563740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.605 [2024-10-08 17:50:05.563747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.605 [2024-10-08 17:50:05.563753] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.605 [2024-10-08 17:50:05.563768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.605 qpair failed and we were unable to recover it.
00:34:13.605 [2024-10-08 17:50:05.573675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.605 [2024-10-08 17:50:05.573747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.605 [2024-10-08 17:50:05.573769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.605 [2024-10-08 17:50:05.573776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.605 [2024-10-08 17:50:05.573782] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.605 [2024-10-08 17:50:05.573798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.605 qpair failed and we were unable to recover it.
00:34:13.605 [2024-10-08 17:50:05.583726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.605 [2024-10-08 17:50:05.583791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.605 [2024-10-08 17:50:05.583807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.605 [2024-10-08 17:50:05.583814] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.605 [2024-10-08 17:50:05.583821] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.605 [2024-10-08 17:50:05.583836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.605 qpair failed and we were unable to recover it.
00:34:13.605 [2024-10-08 17:50:05.593785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.605 [2024-10-08 17:50:05.593850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.605 [2024-10-08 17:50:05.593867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.605 [2024-10-08 17:50:05.593874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.605 [2024-10-08 17:50:05.593880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.605 [2024-10-08 17:50:05.593896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.605 qpair failed and we were unable to recover it.
00:34:13.868 [2024-10-08 17:50:05.603785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.868 [2024-10-08 17:50:05.603843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.868 [2024-10-08 17:50:05.603860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.868 [2024-10-08 17:50:05.603868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.868 [2024-10-08 17:50:05.603874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.868 [2024-10-08 17:50:05.603890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.868 qpair failed and we were unable to recover it.
00:34:13.868 [2024-10-08 17:50:05.613694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.868 [2024-10-08 17:50:05.613759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.869 [2024-10-08 17:50:05.613775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.869 [2024-10-08 17:50:05.613782] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.869 [2024-10-08 17:50:05.613789] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.869 [2024-10-08 17:50:05.613816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.869 qpair failed and we were unable to recover it.
00:34:13.869 [2024-10-08 17:50:05.623872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.869 [2024-10-08 17:50:05.623938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.869 [2024-10-08 17:50:05.623955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.869 [2024-10-08 17:50:05.623963] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.869 [2024-10-08 17:50:05.623970] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.869 [2024-10-08 17:50:05.623993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.869 qpair failed and we were unable to recover it.
00:34:13.869 [2024-10-08 17:50:05.633925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.869 [2024-10-08 17:50:05.634007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.869 [2024-10-08 17:50:05.634024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.869 [2024-10-08 17:50:05.634031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.869 [2024-10-08 17:50:05.634038] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.869 [2024-10-08 17:50:05.634054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.869 qpair failed and we were unable to recover it.
00:34:13.869 [2024-10-08 17:50:05.643913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.869 [2024-10-08 17:50:05.643968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.869 [2024-10-08 17:50:05.643991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.869 [2024-10-08 17:50:05.644000] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.869 [2024-10-08 17:50:05.644007] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.869 [2024-10-08 17:50:05.644023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.869 qpair failed and we were unable to recover it.
00:34:13.869 [2024-10-08 17:50:05.653925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.869 [2024-10-08 17:50:05.653990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.869 [2024-10-08 17:50:05.654007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.869 [2024-10-08 17:50:05.654015] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.869 [2024-10-08 17:50:05.654022] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.869 [2024-10-08 17:50:05.654037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.869 qpair failed and we were unable to recover it.
00:34:13.869 [2024-10-08 17:50:05.664002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.869 [2024-10-08 17:50:05.664069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.869 [2024-10-08 17:50:05.664091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.869 [2024-10-08 17:50:05.664098] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.869 [2024-10-08 17:50:05.664105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.869 [2024-10-08 17:50:05.664121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.869 qpair failed and we were unable to recover it.
00:34:13.869 [2024-10-08 17:50:05.674043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.869 [2024-10-08 17:50:05.674119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.869 [2024-10-08 17:50:05.674136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.869 [2024-10-08 17:50:05.674143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.869 [2024-10-08 17:50:05.674150] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.869 [2024-10-08 17:50:05.674165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.869 qpair failed and we were unable to recover it.
00:34:13.869 [2024-10-08 17:50:05.683921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.869 [2024-10-08 17:50:05.683992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.869 [2024-10-08 17:50:05.684009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.869 [2024-10-08 17:50:05.684016] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.869 [2024-10-08 17:50:05.684022] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.869 [2024-10-08 17:50:05.684038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.869 qpair failed and we were unable to recover it.
00:34:13.869 [2024-10-08 17:50:05.694062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.869 [2024-10-08 17:50:05.694128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.869 [2024-10-08 17:50:05.694144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.869 [2024-10-08 17:50:05.694151] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.869 [2024-10-08 17:50:05.694157] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.869 [2024-10-08 17:50:05.694173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.869 qpair failed and we were unable to recover it.
00:34:13.869 [2024-10-08 17:50:05.704008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.869 [2024-10-08 17:50:05.704073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.869 [2024-10-08 17:50:05.704092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.869 [2024-10-08 17:50:05.704099] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.869 [2024-10-08 17:50:05.704110] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:13.869 [2024-10-08 17:50:05.704132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:13.869 qpair failed and we were unable to recover it.
00:34:13.869 [2024-10-08 17:50:05.714221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.869 [2024-10-08 17:50:05.714322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.869 [2024-10-08 17:50:05.714340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.869 [2024-10-08 17:50:05.714347] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.869 [2024-10-08 17:50:05.714354] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.869 [2024-10-08 17:50:05.714369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.869 qpair failed and we were unable to recover it. 00:34:13.869 [2024-10-08 17:50:05.724151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.869 [2024-10-08 17:50:05.724211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.869 [2024-10-08 17:50:05.724229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.869 [2024-10-08 17:50:05.724237] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.869 [2024-10-08 17:50:05.724244] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.869 [2024-10-08 17:50:05.724259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.869 qpair failed and we were unable to recover it. 00:34:13.869 [2024-10-08 17:50:05.734099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.869 [2024-10-08 17:50:05.734161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.869 [2024-10-08 17:50:05.734179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.869 [2024-10-08 17:50:05.734187] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.869 [2024-10-08 17:50:05.734193] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.869 [2024-10-08 17:50:05.734214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.869 qpair failed and we were unable to recover it. 
00:34:13.869 [2024-10-08 17:50:05.744128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.869 [2024-10-08 17:50:05.744194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.869 [2024-10-08 17:50:05.744212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.869 [2024-10-08 17:50:05.744219] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.869 [2024-10-08 17:50:05.744225] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.869 [2024-10-08 17:50:05.744247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.869 qpair failed and we were unable to recover it. 00:34:13.869 [2024-10-08 17:50:05.754313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.869 [2024-10-08 17:50:05.754381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.870 [2024-10-08 17:50:05.754398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.870 [2024-10-08 17:50:05.754405] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.870 [2024-10-08 17:50:05.754412] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.870 [2024-10-08 17:50:05.754428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.870 qpair failed and we were unable to recover it. 00:34:13.870 [2024-10-08 17:50:05.764293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.870 [2024-10-08 17:50:05.764379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.870 [2024-10-08 17:50:05.764395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.870 [2024-10-08 17:50:05.764403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.870 [2024-10-08 17:50:05.764409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.870 [2024-10-08 17:50:05.764424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.870 qpair failed and we were unable to recover it. 
00:34:13.870 [2024-10-08 17:50:05.774326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.870 [2024-10-08 17:50:05.774390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.870 [2024-10-08 17:50:05.774406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.870 [2024-10-08 17:50:05.774414] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.870 [2024-10-08 17:50:05.774420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.870 [2024-10-08 17:50:05.774435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.870 qpair failed and we were unable to recover it. 00:34:13.870 [2024-10-08 17:50:05.784366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.870 [2024-10-08 17:50:05.784432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.870 [2024-10-08 17:50:05.784448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.870 [2024-10-08 17:50:05.784455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.870 [2024-10-08 17:50:05.784461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.870 [2024-10-08 17:50:05.784476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.870 qpair failed and we were unable to recover it. 00:34:13.870 [2024-10-08 17:50:05.794445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.870 [2024-10-08 17:50:05.794516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.870 [2024-10-08 17:50:05.794533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.870 [2024-10-08 17:50:05.794540] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.870 [2024-10-08 17:50:05.794551] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.870 [2024-10-08 17:50:05.794566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.870 qpair failed and we were unable to recover it. 
00:34:13.870 [2024-10-08 17:50:05.804315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.870 [2024-10-08 17:50:05.804380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.870 [2024-10-08 17:50:05.804396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.870 [2024-10-08 17:50:05.804403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.870 [2024-10-08 17:50:05.804410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.870 [2024-10-08 17:50:05.804425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.870 qpair failed and we were unable to recover it. 00:34:13.870 [2024-10-08 17:50:05.814455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.870 [2024-10-08 17:50:05.814519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.870 [2024-10-08 17:50:05.814536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.870 [2024-10-08 17:50:05.814543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.870 [2024-10-08 17:50:05.814549] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.870 [2024-10-08 17:50:05.814565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.870 qpair failed and we were unable to recover it. 00:34:13.870 [2024-10-08 17:50:05.824506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.870 [2024-10-08 17:50:05.824574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.870 [2024-10-08 17:50:05.824590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.870 [2024-10-08 17:50:05.824597] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.870 [2024-10-08 17:50:05.824604] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.870 [2024-10-08 17:50:05.824620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.870 qpair failed and we were unable to recover it. 
00:34:13.870 [2024-10-08 17:50:05.834551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.870 [2024-10-08 17:50:05.834620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.870 [2024-10-08 17:50:05.834636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.870 [2024-10-08 17:50:05.834644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.870 [2024-10-08 17:50:05.834650] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.870 [2024-10-08 17:50:05.834666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.870 qpair failed and we were unable to recover it. 00:34:13.870 [2024-10-08 17:50:05.844564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.870 [2024-10-08 17:50:05.844619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.870 [2024-10-08 17:50:05.844637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.870 [2024-10-08 17:50:05.844644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.870 [2024-10-08 17:50:05.844651] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.870 [2024-10-08 17:50:05.844666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.870 qpair failed and we were unable to recover it. 00:34:13.870 [2024-10-08 17:50:05.854473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.870 [2024-10-08 17:50:05.854541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.870 [2024-10-08 17:50:05.854558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.870 [2024-10-08 17:50:05.854565] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.870 [2024-10-08 17:50:05.854572] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:13.870 [2024-10-08 17:50:05.854587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:13.870 qpair failed and we were unable to recover it. 
00:34:14.133 [2024-10-08 17:50:05.864515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.133 [2024-10-08 17:50:05.864581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.133 [2024-10-08 17:50:05.864598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.133 [2024-10-08 17:50:05.864605] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.133 [2024-10-08 17:50:05.864611] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.133 [2024-10-08 17:50:05.864627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-10-08 17:50:05.874564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.133 [2024-10-08 17:50:05.874642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.133 [2024-10-08 17:50:05.874659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.133 [2024-10-08 17:50:05.874666] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.133 [2024-10-08 17:50:05.874672] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.133 [2024-10-08 17:50:05.874688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-10-08 17:50:05.884665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.133 [2024-10-08 17:50:05.884731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.133 [2024-10-08 17:50:05.884750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.133 [2024-10-08 17:50:05.884762] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.133 [2024-10-08 17:50:05.884769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.133 [2024-10-08 17:50:05.884784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.133 qpair failed and we were unable to recover it. 
00:34:14.133 [2024-10-08 17:50:05.894562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.133 [2024-10-08 17:50:05.894626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.133 [2024-10-08 17:50:05.894643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.133 [2024-10-08 17:50:05.894650] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.133 [2024-10-08 17:50:05.894656] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.133 [2024-10-08 17:50:05.894672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-10-08 17:50:05.904727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.133 [2024-10-08 17:50:05.904797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.133 [2024-10-08 17:50:05.904813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.133 [2024-10-08 17:50:05.904820] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.133 [2024-10-08 17:50:05.904827] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.133 [2024-10-08 17:50:05.904842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-10-08 17:50:05.914813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.133 [2024-10-08 17:50:05.914882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.133 [2024-10-08 17:50:05.914902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.133 [2024-10-08 17:50:05.914909] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.133 [2024-10-08 17:50:05.914916] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.133 [2024-10-08 17:50:05.914933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.133 qpair failed and we were unable to recover it. 
00:34:14.133 [2024-10-08 17:50:05.924792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.133 [2024-10-08 17:50:05.924856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.133 [2024-10-08 17:50:05.924873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.133 [2024-10-08 17:50:05.924881] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.133 [2024-10-08 17:50:05.924887] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.133 [2024-10-08 17:50:05.924903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-10-08 17:50:05.934813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.133 [2024-10-08 17:50:05.934879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.133 [2024-10-08 17:50:05.934896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.133 [2024-10-08 17:50:05.934903] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.133 [2024-10-08 17:50:05.934910] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.133 [2024-10-08 17:50:05.934926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-10-08 17:50:05.944876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.133 [2024-10-08 17:50:05.944941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.133 [2024-10-08 17:50:05.944957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.133 [2024-10-08 17:50:05.944965] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.133 [2024-10-08 17:50:05.944971] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.133 [2024-10-08 17:50:05.944994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.133 qpair failed and we were unable to recover it. 
00:34:14.133 [2024-10-08 17:50:05.954915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.133 [2024-10-08 17:50:05.954987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.133 [2024-10-08 17:50:05.955004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.133 [2024-10-08 17:50:05.955011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.133 [2024-10-08 17:50:05.955018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.133 [2024-10-08 17:50:05.955034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-10-08 17:50:05.964870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.133 [2024-10-08 17:50:05.964934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.133 [2024-10-08 17:50:05.964951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.133 [2024-10-08 17:50:05.964958] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.133 [2024-10-08 17:50:05.964964] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.133 [2024-10-08 17:50:05.964986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.133 qpair failed and we were unable to recover it. 00:34:14.133 [2024-10-08 17:50:05.974938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.133 [2024-10-08 17:50:05.975006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.133 [2024-10-08 17:50:05.975023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.133 [2024-10-08 17:50:05.975034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.133 [2024-10-08 17:50:05.975041] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.133 [2024-10-08 17:50:05.975057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.133 qpair failed and we were unable to recover it. 
00:34:14.133 [2024-10-08 17:50:05.984994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.133 [2024-10-08 17:50:05.985104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.133 [2024-10-08 17:50:05.985121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.133 [2024-10-08 17:50:05.985128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.133 [2024-10-08 17:50:05.985135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.134 [2024-10-08 17:50:05.985150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.134 qpair failed and we were unable to recover it. 00:34:14.134 [2024-10-08 17:50:05.995026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.134 [2024-10-08 17:50:05.995094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.134 [2024-10-08 17:50:05.995110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.134 [2024-10-08 17:50:05.995118] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.134 [2024-10-08 17:50:05.995124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.134 [2024-10-08 17:50:05.995140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.134 qpair failed and we were unable to recover it. 00:34:14.134 [2024-10-08 17:50:06.005017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.134 [2024-10-08 17:50:06.005069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.134 [2024-10-08 17:50:06.005086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.134 [2024-10-08 17:50:06.005094] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.134 [2024-10-08 17:50:06.005100] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.134 [2024-10-08 17:50:06.005116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.134 qpair failed and we were unable to recover it. 
00:34:14.134 [2024-10-08 17:50:06.014939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.134 [2024-10-08 17:50:06.015011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.134 [2024-10-08 17:50:06.015027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.134 [2024-10-08 17:50:06.015035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.134 [2024-10-08 17:50:06.015041] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.134 [2024-10-08 17:50:06.015057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.134 qpair failed and we were unable to recover it. 00:34:14.134 [2024-10-08 17:50:06.025102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.134 [2024-10-08 17:50:06.025170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.134 [2024-10-08 17:50:06.025187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.134 [2024-10-08 17:50:06.025194] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.134 [2024-10-08 17:50:06.025200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.134 [2024-10-08 17:50:06.025216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.134 qpair failed and we were unable to recover it. 00:34:14.134 [2024-10-08 17:50:06.035049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.134 [2024-10-08 17:50:06.035117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.134 [2024-10-08 17:50:06.035134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.134 [2024-10-08 17:50:06.035141] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.134 [2024-10-08 17:50:06.035147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.134 [2024-10-08 17:50:06.035163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.134 qpair failed and we were unable to recover it. 
00:34:14.134 [2024-10-08 17:50:06.045030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.134 [2024-10-08 17:50:06.045094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.134 [2024-10-08 17:50:06.045110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.134 [2024-10-08 17:50:06.045117] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.134 [2024-10-08 17:50:06.045124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.134 [2024-10-08 17:50:06.045139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.134 qpair failed and we were unable to recover it. 00:34:14.134 [2024-10-08 17:50:06.055058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.134 [2024-10-08 17:50:06.055124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.134 [2024-10-08 17:50:06.055140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.134 [2024-10-08 17:50:06.055148] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.134 [2024-10-08 17:50:06.055154] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.134 [2024-10-08 17:50:06.055170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.134 qpair failed and we were unable to recover it. 00:34:14.134 [2024-10-08 17:50:06.065150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.134 [2024-10-08 17:50:06.065221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.134 [2024-10-08 17:50:06.065241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.134 [2024-10-08 17:50:06.065249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.134 [2024-10-08 17:50:06.065255] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.134 [2024-10-08 17:50:06.065270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.134 qpair failed and we were unable to recover it. 
00:34:14.134 [2024-10-08 17:50:06.075294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.134 [2024-10-08 17:50:06.075370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.134 [2024-10-08 17:50:06.075386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.134 [2024-10-08 17:50:06.075393] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.134 [2024-10-08 17:50:06.075399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.134 [2024-10-08 17:50:06.075414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.134 qpair failed and we were unable to recover it. 00:34:14.134 [2024-10-08 17:50:06.085295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.134 [2024-10-08 17:50:06.085405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.134 [2024-10-08 17:50:06.085421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.134 [2024-10-08 17:50:06.085428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.134 [2024-10-08 17:50:06.085434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.134 [2024-10-08 17:50:06.085449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.134 qpair failed and we were unable to recover it. 00:34:14.134 [2024-10-08 17:50:06.095283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.134 [2024-10-08 17:50:06.095386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.134 [2024-10-08 17:50:06.095402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.134 [2024-10-08 17:50:06.095409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.134 [2024-10-08 17:50:06.095416] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.134 [2024-10-08 17:50:06.095432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.134 qpair failed and we were unable to recover it. 
00:34:14.134 [2024-10-08 17:50:06.105386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.134 [2024-10-08 17:50:06.105454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.134 [2024-10-08 17:50:06.105469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.134 [2024-10-08 17:50:06.105477] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.134 [2024-10-08 17:50:06.105483] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.134 [2024-10-08 17:50:06.105503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.134 qpair failed and we were unable to recover it. 00:34:14.134 [2024-10-08 17:50:06.115387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.134 [2024-10-08 17:50:06.115487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.134 [2024-10-08 17:50:06.115504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.134 [2024-10-08 17:50:06.115511] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.134 [2024-10-08 17:50:06.115517] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.134 [2024-10-08 17:50:06.115533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.134 qpair failed and we were unable to recover it. 00:34:14.397 [2024-10-08 17:50:06.125384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.397 [2024-10-08 17:50:06.125457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.397 [2024-10-08 17:50:06.125474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.397 [2024-10-08 17:50:06.125482] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.397 [2024-10-08 17:50:06.125488] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.397 [2024-10-08 17:50:06.125504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.397 qpair failed and we were unable to recover it. 
00:34:14.397 [2024-10-08 17:50:06.135441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.397 [2024-10-08 17:50:06.135506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.397 [2024-10-08 17:50:06.135523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.397 [2024-10-08 17:50:06.135530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.397 [2024-10-08 17:50:06.135537] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.397 [2024-10-08 17:50:06.135554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.397 qpair failed and we were unable to recover it. 00:34:14.397 [2024-10-08 17:50:06.145433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.397 [2024-10-08 17:50:06.145512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.397 [2024-10-08 17:50:06.145528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.397 [2024-10-08 17:50:06.145535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.397 [2024-10-08 17:50:06.145542] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.397 [2024-10-08 17:50:06.145558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.397 qpair failed and we were unable to recover it. 00:34:14.397 [2024-10-08 17:50:06.155513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.397 [2024-10-08 17:50:06.155581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.397 [2024-10-08 17:50:06.155603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.397 [2024-10-08 17:50:06.155611] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.397 [2024-10-08 17:50:06.155617] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.397 [2024-10-08 17:50:06.155633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.397 qpair failed and we were unable to recover it. 
00:34:14.397 [2024-10-08 17:50:06.165539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.397 [2024-10-08 17:50:06.165646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.397 [2024-10-08 17:50:06.165663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.397 [2024-10-08 17:50:06.165671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.397 [2024-10-08 17:50:06.165677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.397 [2024-10-08 17:50:06.165693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.397 qpair failed and we were unable to recover it. 00:34:14.397 [2024-10-08 17:50:06.175452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.397 [2024-10-08 17:50:06.175521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.397 [2024-10-08 17:50:06.175537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.397 [2024-10-08 17:50:06.175545] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.397 [2024-10-08 17:50:06.175551] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.397 [2024-10-08 17:50:06.175566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.397 qpair failed and we were unable to recover it. 00:34:14.397 [2024-10-08 17:50:06.185591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.397 [2024-10-08 17:50:06.185661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.397 [2024-10-08 17:50:06.185677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.397 [2024-10-08 17:50:06.185684] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.397 [2024-10-08 17:50:06.185691] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.397 [2024-10-08 17:50:06.185708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.397 qpair failed and we were unable to recover it. 
00:34:14.397 [2024-10-08 17:50:06.195522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.397 [2024-10-08 17:50:06.195599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.397 [2024-10-08 17:50:06.195615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.397 [2024-10-08 17:50:06.195622] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.397 [2024-10-08 17:50:06.195634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.397 [2024-10-08 17:50:06.195650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.397 qpair failed and we were unable to recover it. 00:34:14.397 [2024-10-08 17:50:06.205652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.397 [2024-10-08 17:50:06.205711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.397 [2024-10-08 17:50:06.205727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.397 [2024-10-08 17:50:06.205734] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.397 [2024-10-08 17:50:06.205740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.397 [2024-10-08 17:50:06.205756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.397 qpair failed and we were unable to recover it. 00:34:14.397 [2024-10-08 17:50:06.215682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.397 [2024-10-08 17:50:06.215761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.397 [2024-10-08 17:50:06.215796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.397 [2024-10-08 17:50:06.215805] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.397 [2024-10-08 17:50:06.215813] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.397 [2024-10-08 17:50:06.215835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.397 qpair failed and we were unable to recover it. 
00:34:14.397 [2024-10-08 17:50:06.225735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.397 [2024-10-08 17:50:06.225831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.397 [2024-10-08 17:50:06.225852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.397 [2024-10-08 17:50:06.225860] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.397 [2024-10-08 17:50:06.225866] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.397 [2024-10-08 17:50:06.225884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.397 qpair failed and we were unable to recover it. 00:34:14.397 [2024-10-08 17:50:06.235735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.397 [2024-10-08 17:50:06.235809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.397 [2024-10-08 17:50:06.235827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.397 [2024-10-08 17:50:06.235834] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.397 [2024-10-08 17:50:06.235841] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.397 [2024-10-08 17:50:06.235857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.397 qpair failed and we were unable to recover it. 00:34:14.397 [2024-10-08 17:50:06.245762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.398 [2024-10-08 17:50:06.245844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.398 [2024-10-08 17:50:06.245863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.398 [2024-10-08 17:50:06.245870] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.398 [2024-10-08 17:50:06.245876] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.398 [2024-10-08 17:50:06.245893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.398 qpair failed and we were unable to recover it. 
00:34:14.398 [2024-10-08 17:50:06.255803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.398 [2024-10-08 17:50:06.255864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.398 [2024-10-08 17:50:06.255882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.398 [2024-10-08 17:50:06.255889] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.398 [2024-10-08 17:50:06.255896] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.398 [2024-10-08 17:50:06.255912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.398 qpair failed and we were unable to recover it. 00:34:14.398 [2024-10-08 17:50:06.265846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.398 [2024-10-08 17:50:06.265914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.398 [2024-10-08 17:50:06.265930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.398 [2024-10-08 17:50:06.265938] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.398 [2024-10-08 17:50:06.265944] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.398 [2024-10-08 17:50:06.265960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.398 qpair failed and we were unable to recover it. 00:34:14.398 [2024-10-08 17:50:06.275884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.398 [2024-10-08 17:50:06.275950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.398 [2024-10-08 17:50:06.275966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.398 [2024-10-08 17:50:06.275979] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.398 [2024-10-08 17:50:06.275987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.398 [2024-10-08 17:50:06.276003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.398 qpair failed and we were unable to recover it. 
00:34:14.398 [2024-10-08 17:50:06.285793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.398 [2024-10-08 17:50:06.285889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.398 [2024-10-08 17:50:06.285906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.398 [2024-10-08 17:50:06.285913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.398 [2024-10-08 17:50:06.285926] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.398 [2024-10-08 17:50:06.285942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.398 qpair failed and we were unable to recover it. 00:34:14.398 [2024-10-08 17:50:06.295781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.398 [2024-10-08 17:50:06.295851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.398 [2024-10-08 17:50:06.295869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.398 [2024-10-08 17:50:06.295877] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.398 [2024-10-08 17:50:06.295883] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.398 [2024-10-08 17:50:06.295906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.398 qpair failed and we were unable to recover it. 00:34:14.398 [2024-10-08 17:50:06.305818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.398 [2024-10-08 17:50:06.305884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.398 [2024-10-08 17:50:06.305901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.398 [2024-10-08 17:50:06.305909] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.398 [2024-10-08 17:50:06.305915] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.398 [2024-10-08 17:50:06.305937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.398 qpair failed and we were unable to recover it. 
00:34:14.398 [2024-10-08 17:50:06.316005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.398 [2024-10-08 17:50:06.316071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.398 [2024-10-08 17:50:06.316088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.398 [2024-10-08 17:50:06.316096] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.398 [2024-10-08 17:50:06.316103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.398 [2024-10-08 17:50:06.316120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.398 qpair failed and we were unable to recover it. 00:34:14.398 [2024-10-08 17:50:06.326017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.398 [2024-10-08 17:50:06.326078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.398 [2024-10-08 17:50:06.326095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.398 [2024-10-08 17:50:06.326102] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.398 [2024-10-08 17:50:06.326109] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.398 [2024-10-08 17:50:06.326126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.398 qpair failed and we were unable to recover it. 00:34:14.398 [2024-10-08 17:50:06.336064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.398 [2024-10-08 17:50:06.336138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.398 [2024-10-08 17:50:06.336155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.398 [2024-10-08 17:50:06.336163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.398 [2024-10-08 17:50:06.336170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.398 [2024-10-08 17:50:06.336186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.398 qpair failed and we were unable to recover it. 
00:34:14.398 [2024-10-08 17:50:06.346099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.398 [2024-10-08 17:50:06.346166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.398 [2024-10-08 17:50:06.346182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.398 [2024-10-08 17:50:06.346190] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.398 [2024-10-08 17:50:06.346196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.398 [2024-10-08 17:50:06.346212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.398 qpair failed and we were unable to recover it. 00:34:14.398 [2024-10-08 17:50:06.356155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.398 [2024-10-08 17:50:06.356229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.398 [2024-10-08 17:50:06.356246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.398 [2024-10-08 17:50:06.356253] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.398 [2024-10-08 17:50:06.356259] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.398 [2024-10-08 17:50:06.356275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.398 qpair failed and we were unable to recover it. 00:34:14.398 [2024-10-08 17:50:06.366175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.398 [2024-10-08 17:50:06.366232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.398 [2024-10-08 17:50:06.366249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.398 [2024-10-08 17:50:06.366256] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.398 [2024-10-08 17:50:06.366262] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.398 [2024-10-08 17:50:06.366279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.398 qpair failed and we were unable to recover it. 
00:34:14.398 [2024-10-08 17:50:06.376171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.398 [2024-10-08 17:50:06.376237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.398 [2024-10-08 17:50:06.376253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.398 [2024-10-08 17:50:06.376266] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.398 [2024-10-08 17:50:06.376272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.398 [2024-10-08 17:50:06.376288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.398 qpair failed and we were unable to recover it. 00:34:14.398 [2024-10-08 17:50:06.386276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.398 [2024-10-08 17:50:06.386368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.399 [2024-10-08 17:50:06.386385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.399 [2024-10-08 17:50:06.386392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.399 [2024-10-08 17:50:06.386398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.399 [2024-10-08 17:50:06.386414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.399 qpair failed and we were unable to recover it. 00:34:14.660 [2024-10-08 17:50:06.396314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.660 [2024-10-08 17:50:06.396423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.660 [2024-10-08 17:50:06.396440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.660 [2024-10-08 17:50:06.396447] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.660 [2024-10-08 17:50:06.396453] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.660 [2024-10-08 17:50:06.396470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.660 qpair failed and we were unable to recover it. 
00:34:14.660 [2024-10-08 17:50:06.406286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.660 [2024-10-08 17:50:06.406344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.660 [2024-10-08 17:50:06.406360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.660 [2024-10-08 17:50:06.406368] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.660 [2024-10-08 17:50:06.406374] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.660 [2024-10-08 17:50:06.406390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.660 qpair failed and we were unable to recover it. 00:34:14.660 [2024-10-08 17:50:06.416274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.660 [2024-10-08 17:50:06.416335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.661 [2024-10-08 17:50:06.416350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.661 [2024-10-08 17:50:06.416358] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.661 [2024-10-08 17:50:06.416364] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.661 [2024-10-08 17:50:06.416380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.661 qpair failed and we were unable to recover it. 00:34:14.661 [2024-10-08 17:50:06.426210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.661 [2024-10-08 17:50:06.426276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.661 [2024-10-08 17:50:06.426293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.661 [2024-10-08 17:50:06.426300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.661 [2024-10-08 17:50:06.426307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.661 [2024-10-08 17:50:06.426322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.661 qpair failed and we were unable to recover it. 
00:34:14.661 [2024-10-08 17:50:06.436384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.661 [2024-10-08 17:50:06.436461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.661 [2024-10-08 17:50:06.436477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.661 [2024-10-08 17:50:06.436485] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.661 [2024-10-08 17:50:06.436491] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.661 [2024-10-08 17:50:06.436507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.661 qpair failed and we were unable to recover it. 00:34:14.661 [2024-10-08 17:50:06.446231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.661 [2024-10-08 17:50:06.446292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.661 [2024-10-08 17:50:06.446311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.661 [2024-10-08 17:50:06.446319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.661 [2024-10-08 17:50:06.446325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.661 [2024-10-08 17:50:06.446342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.661 qpair failed and we were unable to recover it. 00:34:14.661 [2024-10-08 17:50:06.456282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.661 [2024-10-08 17:50:06.456340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.661 [2024-10-08 17:50:06.456359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.661 [2024-10-08 17:50:06.456366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.661 [2024-10-08 17:50:06.456372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.661 [2024-10-08 17:50:06.456389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.661 qpair failed and we were unable to recover it. 
00:34:14.661 [2024-10-08 17:50:06.466462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.661 [2024-10-08 17:50:06.466535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.661 [2024-10-08 17:50:06.466553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.661 [2024-10-08 17:50:06.466565] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.661 [2024-10-08 17:50:06.466572] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.661 [2024-10-08 17:50:06.466588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.661 qpair failed and we were unable to recover it. 00:34:14.661 [2024-10-08 17:50:06.476491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.661 [2024-10-08 17:50:06.476550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.661 [2024-10-08 17:50:06.476566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.661 [2024-10-08 17:50:06.476574] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.661 [2024-10-08 17:50:06.476580] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.661 [2024-10-08 17:50:06.476595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.661 qpair failed and we were unable to recover it. 00:34:14.661 [2024-10-08 17:50:06.486428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.661 [2024-10-08 17:50:06.486494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.661 [2024-10-08 17:50:06.486509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.661 [2024-10-08 17:50:06.486516] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.661 [2024-10-08 17:50:06.486522] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.661 [2024-10-08 17:50:06.486538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.661 qpair failed and we were unable to recover it. 
00:34:14.661 [2024-10-08 17:50:06.496508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.661 [2024-10-08 17:50:06.496573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.661 [2024-10-08 17:50:06.496588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.661 [2024-10-08 17:50:06.496595] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.661 [2024-10-08 17:50:06.496601] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.661 [2024-10-08 17:50:06.496616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.661 qpair failed and we were unable to recover it. 00:34:14.661 [2024-10-08 17:50:06.506510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.661 [2024-10-08 17:50:06.506569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.661 [2024-10-08 17:50:06.506584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.661 [2024-10-08 17:50:06.506591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.661 [2024-10-08 17:50:06.506597] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.661 [2024-10-08 17:50:06.506611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.661 qpair failed and we were unable to recover it. 00:34:14.661 [2024-10-08 17:50:06.516610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.661 [2024-10-08 17:50:06.516674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.661 [2024-10-08 17:50:06.516688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.661 [2024-10-08 17:50:06.516695] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.661 [2024-10-08 17:50:06.516701] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.661 [2024-10-08 17:50:06.516716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.661 qpair failed and we were unable to recover it. 
00:34:14.661 [2024-10-08 17:50:06.526557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.661 [2024-10-08 17:50:06.526607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.661 [2024-10-08 17:50:06.526622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.661 [2024-10-08 17:50:06.526628] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.661 [2024-10-08 17:50:06.526635] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.661 [2024-10-08 17:50:06.526649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.661 qpair failed and we were unable to recover it. 00:34:14.661 [2024-10-08 17:50:06.536587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.661 [2024-10-08 17:50:06.536645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.661 [2024-10-08 17:50:06.536659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.661 [2024-10-08 17:50:06.536666] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.661 [2024-10-08 17:50:06.536672] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.661 [2024-10-08 17:50:06.536686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.661 qpair failed and we were unable to recover it. 00:34:14.661 [2024-10-08 17:50:06.546655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.661 [2024-10-08 17:50:06.546719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.661 [2024-10-08 17:50:06.546732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.661 [2024-10-08 17:50:06.546739] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.661 [2024-10-08 17:50:06.546746] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.661 [2024-10-08 17:50:06.546760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.661 qpair failed and we were unable to recover it. 
00:34:14.661 [2024-10-08 17:50:06.556694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.661 [2024-10-08 17:50:06.556761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.661 [2024-10-08 17:50:06.556793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.662 [2024-10-08 17:50:06.556802] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.662 [2024-10-08 17:50:06.556809] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.662 [2024-10-08 17:50:06.556829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.662 qpair failed and we were unable to recover it. 00:34:14.662 [2024-10-08 17:50:06.566521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.662 [2024-10-08 17:50:06.566571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.662 [2024-10-08 17:50:06.566588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.662 [2024-10-08 17:50:06.566595] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.662 [2024-10-08 17:50:06.566601] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.662 [2024-10-08 17:50:06.566617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.662 qpair failed and we were unable to recover it. 00:34:14.662 [2024-10-08 17:50:06.576560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.662 [2024-10-08 17:50:06.576607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.662 [2024-10-08 17:50:06.576621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.662 [2024-10-08 17:50:06.576629] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.662 [2024-10-08 17:50:06.576635] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.662 [2024-10-08 17:50:06.576649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.662 qpair failed and we were unable to recover it. 
00:34:14.662 [2024-10-08 17:50:06.586764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.662 [2024-10-08 17:50:06.586822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.662 [2024-10-08 17:50:06.586835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.662 [2024-10-08 17:50:06.586842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.662 [2024-10-08 17:50:06.586848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.662 [2024-10-08 17:50:06.586862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.662 qpair failed and we were unable to recover it. 00:34:14.662 [2024-10-08 17:50:06.596648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.662 [2024-10-08 17:50:06.596696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.662 [2024-10-08 17:50:06.596710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.662 [2024-10-08 17:50:06.596716] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.662 [2024-10-08 17:50:06.596723] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.662 [2024-10-08 17:50:06.596741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.662 qpair failed and we were unable to recover it. 00:34:14.662 [2024-10-08 17:50:06.606798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.662 [2024-10-08 17:50:06.606890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.662 [2024-10-08 17:50:06.606904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.662 [2024-10-08 17:50:06.606911] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.662 [2024-10-08 17:50:06.606918] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.662 [2024-10-08 17:50:06.606932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.662 qpair failed and we were unable to recover it. 
00:34:14.662 [2024-10-08 17:50:06.616801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.662 [2024-10-08 17:50:06.616861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.662 [2024-10-08 17:50:06.616874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.662 [2024-10-08 17:50:06.616882] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.662 [2024-10-08 17:50:06.616888] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.662 [2024-10-08 17:50:06.616902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.662 qpair failed and we were unable to recover it. 00:34:14.662 [2024-10-08 17:50:06.626869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.662 [2024-10-08 17:50:06.626924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.662 [2024-10-08 17:50:06.626939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.662 [2024-10-08 17:50:06.626949] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.662 [2024-10-08 17:50:06.626956] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.662 [2024-10-08 17:50:06.626971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.662 qpair failed and we were unable to recover it. 00:34:14.662 [2024-10-08 17:50:06.636858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.662 [2024-10-08 17:50:06.636908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.662 [2024-10-08 17:50:06.636921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.662 [2024-10-08 17:50:06.636928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.662 [2024-10-08 17:50:06.636935] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.662 [2024-10-08 17:50:06.636949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.662 qpair failed and we were unable to recover it. 
00:34:14.662 [2024-10-08 17:50:06.646893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.662 [2024-10-08 17:50:06.646945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.662 [2024-10-08 17:50:06.646961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.662 [2024-10-08 17:50:06.646968] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.662 [2024-10-08 17:50:06.646978] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.662 [2024-10-08 17:50:06.646992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.662 qpair failed and we were unable to recover it. 00:34:14.923 [2024-10-08 17:50:06.656915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.923 [2024-10-08 17:50:06.656966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.923 [2024-10-08 17:50:06.656984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.924 [2024-10-08 17:50:06.656991] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.924 [2024-10-08 17:50:06.656997] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.924 [2024-10-08 17:50:06.657011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.924 qpair failed and we were unable to recover it. 00:34:14.924 [2024-10-08 17:50:06.666970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.924 [2024-10-08 17:50:06.667031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.924 [2024-10-08 17:50:06.667044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.924 [2024-10-08 17:50:06.667051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.924 [2024-10-08 17:50:06.667057] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.924 [2024-10-08 17:50:06.667071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.924 qpair failed and we were unable to recover it. 
00:34:14.924 [2024-10-08 17:50:06.676986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.924 [2024-10-08 17:50:06.677035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.924 [2024-10-08 17:50:06.677057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.924 [2024-10-08 17:50:06.677064] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.924 [2024-10-08 17:50:06.677070] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.924 [2024-10-08 17:50:06.677098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.924 qpair failed and we were unable to recover it. 00:34:14.924 [2024-10-08 17:50:06.686967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.924 [2024-10-08 17:50:06.687019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.924 [2024-10-08 17:50:06.687032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.924 [2024-10-08 17:50:06.687039] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.924 [2024-10-08 17:50:06.687045] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.924 [2024-10-08 17:50:06.687063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.924 qpair failed and we were unable to recover it. 00:34:14.924 [2024-10-08 17:50:06.697007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.924 [2024-10-08 17:50:06.697051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.924 [2024-10-08 17:50:06.697064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.924 [2024-10-08 17:50:06.697071] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.924 [2024-10-08 17:50:06.697077] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.924 [2024-10-08 17:50:06.697091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.924 qpair failed and we were unable to recover it. 
00:34:14.924 [2024-10-08 17:50:06.707077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.924 [2024-10-08 17:50:06.707131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.924 [2024-10-08 17:50:06.707144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.924 [2024-10-08 17:50:06.707150] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.924 [2024-10-08 17:50:06.707157] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.924 [2024-10-08 17:50:06.707170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.924 qpair failed and we were unable to recover it. 00:34:14.924 [2024-10-08 17:50:06.717116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.924 [2024-10-08 17:50:06.717197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.924 [2024-10-08 17:50:06.717210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.924 [2024-10-08 17:50:06.717217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.924 [2024-10-08 17:50:06.717223] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.924 [2024-10-08 17:50:06.717237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.924 qpair failed and we were unable to recover it. 00:34:14.924 [2024-10-08 17:50:06.727074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.924 [2024-10-08 17:50:06.727119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.924 [2024-10-08 17:50:06.727133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.924 [2024-10-08 17:50:06.727140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.924 [2024-10-08 17:50:06.727146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.924 [2024-10-08 17:50:06.727160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.924 qpair failed and we were unable to recover it. 
00:34:14.924 [2024-10-08 17:50:06.737127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.924 [2024-10-08 17:50:06.737179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.924 [2024-10-08 17:50:06.737195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.924 [2024-10-08 17:50:06.737202] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.924 [2024-10-08 17:50:06.737208] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.924 [2024-10-08 17:50:06.737222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.924 qpair failed and we were unable to recover it. 00:34:14.924 [2024-10-08 17:50:06.747217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.924 [2024-10-08 17:50:06.747270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.924 [2024-10-08 17:50:06.747283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.924 [2024-10-08 17:50:06.747290] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.924 [2024-10-08 17:50:06.747296] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.924 [2024-10-08 17:50:06.747309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.924 qpair failed and we were unable to recover it. 00:34:14.924 [2024-10-08 17:50:06.757207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.924 [2024-10-08 17:50:06.757259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.924 [2024-10-08 17:50:06.757272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.924 [2024-10-08 17:50:06.757279] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.924 [2024-10-08 17:50:06.757286] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.924 [2024-10-08 17:50:06.757302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.924 qpair failed and we were unable to recover it. 
00:34:14.924 [2024-10-08 17:50:06.767247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.924 [2024-10-08 17:50:06.767314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.924 [2024-10-08 17:50:06.767327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.924 [2024-10-08 17:50:06.767334] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.924 [2024-10-08 17:50:06.767340] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.924 [2024-10-08 17:50:06.767354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.924 qpair failed and we were unable to recover it. 00:34:14.924 [2024-10-08 17:50:06.777272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.924 [2024-10-08 17:50:06.777345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.924 [2024-10-08 17:50:06.777358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.924 [2024-10-08 17:50:06.777365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.924 [2024-10-08 17:50:06.777375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.924 [2024-10-08 17:50:06.777389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.924 qpair failed and we were unable to recover it. 00:34:14.924 [2024-10-08 17:50:06.787314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.924 [2024-10-08 17:50:06.787369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.924 [2024-10-08 17:50:06.787382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.924 [2024-10-08 17:50:06.787389] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.924 [2024-10-08 17:50:06.787395] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.924 [2024-10-08 17:50:06.787409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.924 qpair failed and we were unable to recover it. 
00:34:14.924 [2024-10-08 17:50:06.797317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.924 [2024-10-08 17:50:06.797368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.924 [2024-10-08 17:50:06.797381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.925 [2024-10-08 17:50:06.797388] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.925 [2024-10-08 17:50:06.797394] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.925 [2024-10-08 17:50:06.797408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.925 qpair failed and we were unable to recover it. 00:34:14.925 [2024-10-08 17:50:06.807346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.925 [2024-10-08 17:50:06.807397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.925 [2024-10-08 17:50:06.807410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.925 [2024-10-08 17:50:06.807417] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.925 [2024-10-08 17:50:06.807423] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.925 [2024-10-08 17:50:06.807436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.925 qpair failed and we were unable to recover it. 00:34:14.925 [2024-10-08 17:50:06.817363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.925 [2024-10-08 17:50:06.817446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.925 [2024-10-08 17:50:06.817459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.925 [2024-10-08 17:50:06.817466] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.925 [2024-10-08 17:50:06.817472] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.925 [2024-10-08 17:50:06.817486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.925 qpair failed and we were unable to recover it. 
00:34:14.925 [2024-10-08 17:50:06.827467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.925 [2024-10-08 17:50:06.827525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.925 [2024-10-08 17:50:06.827538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.925 [2024-10-08 17:50:06.827544] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.925 [2024-10-08 17:50:06.827550] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.925 [2024-10-08 17:50:06.827564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.925 qpair failed and we were unable to recover it. 00:34:14.925 [2024-10-08 17:50:06.837467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.925 [2024-10-08 17:50:06.837552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.925 [2024-10-08 17:50:06.837565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.925 [2024-10-08 17:50:06.837572] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.925 [2024-10-08 17:50:06.837579] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.925 [2024-10-08 17:50:06.837592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.925 qpair failed and we were unable to recover it. 00:34:14.925 [2024-10-08 17:50:06.847493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.925 [2024-10-08 17:50:06.847544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.925 [2024-10-08 17:50:06.847556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.925 [2024-10-08 17:50:06.847563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.925 [2024-10-08 17:50:06.847570] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:14.925 [2024-10-08 17:50:06.847583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:14.925 qpair failed and we were unable to recover it. 
[The identical failure sequence repeats for every subsequent connect attempt between 17:50:06.857 and 17:50:07.419: _nvmf_ctrlr_add_io_qpair reports "Unknown controller ID 0x1"; nvme_fabric_qpair_connect_poll reports "Connect command failed, rc -5" against traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1, then "Connect command completed with error: sct 1, sc 130"; nvme_tcp fails to poll the Fabric CONNECT and to connect tqpair=0x7faa44000b90; spdk_nvme_qpair_process_completions reports "CQ transport error -6 (No such device or address) on qpair id 3"; and each attempt ends with "qpair failed and we were unable to recover it."]
00:34:15.453 [2024-10-08 17:50:07.429078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.453 [2024-10-08 17:50:07.429130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.453 [2024-10-08 17:50:07.429143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.453 [2024-10-08 17:50:07.429150] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.453 [2024-10-08 17:50:07.429156] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.453 [2024-10-08 17:50:07.429170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-10-08 17:50:07.439070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.453 [2024-10-08 17:50:07.439120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.453 [2024-10-08 17:50:07.439133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.453 [2024-10-08 17:50:07.439140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.453 [2024-10-08 17:50:07.439146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.453 [2024-10-08 17:50:07.439160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.715 [2024-10-08 17:50:07.449088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.715 [2024-10-08 17:50:07.449134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.715 [2024-10-08 17:50:07.449147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.715 [2024-10-08 17:50:07.449158] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.715 [2024-10-08 17:50:07.449164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.715 [2024-10-08 17:50:07.449178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.715 qpair failed and we were unable to recover it. 
00:34:15.715 [2024-10-08 17:50:07.459026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.715 [2024-10-08 17:50:07.459079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.716 [2024-10-08 17:50:07.459092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.716 [2024-10-08 17:50:07.459099] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.716 [2024-10-08 17:50:07.459105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.716 [2024-10-08 17:50:07.459119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-10-08 17:50:07.469189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.716 [2024-10-08 17:50:07.469242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.716 [2024-10-08 17:50:07.469255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.716 [2024-10-08 17:50:07.469262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.716 [2024-10-08 17:50:07.469268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.716 [2024-10-08 17:50:07.469282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-10-08 17:50:07.479194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.716 [2024-10-08 17:50:07.479284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.716 [2024-10-08 17:50:07.479297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.716 [2024-10-08 17:50:07.479304] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.716 [2024-10-08 17:50:07.479311] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.716 [2024-10-08 17:50:07.479324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.716 qpair failed and we were unable to recover it. 
00:34:15.716 [2024-10-08 17:50:07.489202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.716 [2024-10-08 17:50:07.489250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.716 [2024-10-08 17:50:07.489263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.716 [2024-10-08 17:50:07.489273] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.716 [2024-10-08 17:50:07.489279] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.716 [2024-10-08 17:50:07.489294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-10-08 17:50:07.499219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.716 [2024-10-08 17:50:07.499271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.716 [2024-10-08 17:50:07.499284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.716 [2024-10-08 17:50:07.499291] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.716 [2024-10-08 17:50:07.499297] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.716 [2024-10-08 17:50:07.499311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-10-08 17:50:07.509305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.716 [2024-10-08 17:50:07.509358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.716 [2024-10-08 17:50:07.509371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.716 [2024-10-08 17:50:07.509378] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.716 [2024-10-08 17:50:07.509384] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.716 [2024-10-08 17:50:07.509397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.716 qpair failed and we were unable to recover it. 
00:34:15.716 [2024-10-08 17:50:07.519258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.716 [2024-10-08 17:50:07.519311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.716 [2024-10-08 17:50:07.519324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.716 [2024-10-08 17:50:07.519330] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.716 [2024-10-08 17:50:07.519337] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.716 [2024-10-08 17:50:07.519350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-10-08 17:50:07.529306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.716 [2024-10-08 17:50:07.529352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.716 [2024-10-08 17:50:07.529365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.716 [2024-10-08 17:50:07.529373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.716 [2024-10-08 17:50:07.529381] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.716 [2024-10-08 17:50:07.529395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-10-08 17:50:07.539327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.716 [2024-10-08 17:50:07.539375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.716 [2024-10-08 17:50:07.539388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.716 [2024-10-08 17:50:07.539398] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.716 [2024-10-08 17:50:07.539404] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.716 [2024-10-08 17:50:07.539418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.716 qpair failed and we were unable to recover it. 
00:34:15.716 [2024-10-08 17:50:07.549318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.716 [2024-10-08 17:50:07.549414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.716 [2024-10-08 17:50:07.549427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.716 [2024-10-08 17:50:07.549434] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.716 [2024-10-08 17:50:07.549441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.716 [2024-10-08 17:50:07.549454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-10-08 17:50:07.559275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.716 [2024-10-08 17:50:07.559326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.716 [2024-10-08 17:50:07.559339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.716 [2024-10-08 17:50:07.559346] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.716 [2024-10-08 17:50:07.559352] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.716 [2024-10-08 17:50:07.559366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-10-08 17:50:07.569417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.716 [2024-10-08 17:50:07.569468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.716 [2024-10-08 17:50:07.569480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.716 [2024-10-08 17:50:07.569487] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.716 [2024-10-08 17:50:07.569493] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.716 [2024-10-08 17:50:07.569507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.716 qpair failed and we were unable to recover it. 
00:34:15.716 [2024-10-08 17:50:07.579440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.716 [2024-10-08 17:50:07.579488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.716 [2024-10-08 17:50:07.579503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.716 [2024-10-08 17:50:07.579510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.716 [2024-10-08 17:50:07.579516] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.716 [2024-10-08 17:50:07.579534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-10-08 17:50:07.589545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.716 [2024-10-08 17:50:07.589625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.716 [2024-10-08 17:50:07.589638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.716 [2024-10-08 17:50:07.589645] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.716 [2024-10-08 17:50:07.589651] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.716 [2024-10-08 17:50:07.589665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-10-08 17:50:07.599514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.717 [2024-10-08 17:50:07.599561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.717 [2024-10-08 17:50:07.599574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.717 [2024-10-08 17:50:07.599581] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.717 [2024-10-08 17:50:07.599587] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.717 [2024-10-08 17:50:07.599601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.717 qpair failed and we were unable to recover it. 
00:34:15.717 [2024-10-08 17:50:07.609513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.717 [2024-10-08 17:50:07.609559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.717 [2024-10-08 17:50:07.609572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.717 [2024-10-08 17:50:07.609579] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.717 [2024-10-08 17:50:07.609586] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.717 [2024-10-08 17:50:07.609600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-10-08 17:50:07.619561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.717 [2024-10-08 17:50:07.619611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.717 [2024-10-08 17:50:07.619624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.717 [2024-10-08 17:50:07.619630] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.717 [2024-10-08 17:50:07.619637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.717 [2024-10-08 17:50:07.619650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-10-08 17:50:07.629541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.717 [2024-10-08 17:50:07.629642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.717 [2024-10-08 17:50:07.629658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.717 [2024-10-08 17:50:07.629665] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.717 [2024-10-08 17:50:07.629672] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.717 [2024-10-08 17:50:07.629686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.717 qpair failed and we were unable to recover it. 
00:34:15.717 [2024-10-08 17:50:07.639622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.717 [2024-10-08 17:50:07.639673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.717 [2024-10-08 17:50:07.639686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.717 [2024-10-08 17:50:07.639693] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.717 [2024-10-08 17:50:07.639699] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.717 [2024-10-08 17:50:07.639712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-10-08 17:50:07.649496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.717 [2024-10-08 17:50:07.649545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.717 [2024-10-08 17:50:07.649559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.717 [2024-10-08 17:50:07.649566] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.717 [2024-10-08 17:50:07.649572] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.717 [2024-10-08 17:50:07.649591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-10-08 17:50:07.659658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.717 [2024-10-08 17:50:07.659707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.717 [2024-10-08 17:50:07.659722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.717 [2024-10-08 17:50:07.659729] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.717 [2024-10-08 17:50:07.659735] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.717 [2024-10-08 17:50:07.659753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.717 qpair failed and we were unable to recover it. 
00:34:15.717 [2024-10-08 17:50:07.669727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.717 [2024-10-08 17:50:07.669782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.717 [2024-10-08 17:50:07.669795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.717 [2024-10-08 17:50:07.669802] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.717 [2024-10-08 17:50:07.669809] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.717 [2024-10-08 17:50:07.669826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-10-08 17:50:07.679717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.717 [2024-10-08 17:50:07.679768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.717 [2024-10-08 17:50:07.679781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.717 [2024-10-08 17:50:07.679788] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.717 [2024-10-08 17:50:07.679794] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.717 [2024-10-08 17:50:07.679808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-10-08 17:50:07.689744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.717 [2024-10-08 17:50:07.689794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.717 [2024-10-08 17:50:07.689808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.717 [2024-10-08 17:50:07.689815] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.717 [2024-10-08 17:50:07.689821] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.717 [2024-10-08 17:50:07.689835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.717 qpair failed and we were unable to recover it. 
00:34:15.717 [2024-10-08 17:50:07.699769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.717 [2024-10-08 17:50:07.699826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.717 [2024-10-08 17:50:07.699839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.717 [2024-10-08 17:50:07.699846] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.717 [2024-10-08 17:50:07.699852] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.717 [2024-10-08 17:50:07.699866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.979 [2024-10-08 17:50:07.709842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.979 [2024-10-08 17:50:07.709897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.979 [2024-10-08 17:50:07.709910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.979 [2024-10-08 17:50:07.709917] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.979 [2024-10-08 17:50:07.709923] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.979 [2024-10-08 17:50:07.709937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.979 qpair failed and we were unable to recover it. 00:34:15.979 [2024-10-08 17:50:07.719846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.979 [2024-10-08 17:50:07.719898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.979 [2024-10-08 17:50:07.719914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.979 [2024-10-08 17:50:07.719921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.979 [2024-10-08 17:50:07.719927] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.979 [2024-10-08 17:50:07.719940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.979 qpair failed and we were unable to recover it. 
00:34:15.979 [2024-10-08 17:50:07.729846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.979 [2024-10-08 17:50:07.729892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.979 [2024-10-08 17:50:07.729905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.979 [2024-10-08 17:50:07.729912] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.979 [2024-10-08 17:50:07.729918] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.979 [2024-10-08 17:50:07.729932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.979 qpair failed and we were unable to recover it. 00:34:15.979 [2024-10-08 17:50:07.739911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.979 [2024-10-08 17:50:07.739954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.979 [2024-10-08 17:50:07.739967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.979 [2024-10-08 17:50:07.739978] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.979 [2024-10-08 17:50:07.739984] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.980 [2024-10-08 17:50:07.739998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.980 qpair failed and we were unable to recover it. 00:34:15.980 [2024-10-08 17:50:07.749930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.980 [2024-10-08 17:50:07.749986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.980 [2024-10-08 17:50:07.749999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.980 [2024-10-08 17:50:07.750006] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.980 [2024-10-08 17:50:07.750012] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.980 [2024-10-08 17:50:07.750026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.980 qpair failed and we were unable to recover it. 
00:34:15.980 [2024-10-08 17:50:07.759819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.980 [2024-10-08 17:50:07.759865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.980 [2024-10-08 17:50:07.759878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.980 [2024-10-08 17:50:07.759885] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.980 [2024-10-08 17:50:07.759894] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.980 [2024-10-08 17:50:07.759908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.980 qpair failed and we were unable to recover it. 00:34:15.980 [2024-10-08 17:50:07.769851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.980 [2024-10-08 17:50:07.769900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.980 [2024-10-08 17:50:07.769914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.980 [2024-10-08 17:50:07.769921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.980 [2024-10-08 17:50:07.769927] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.980 [2024-10-08 17:50:07.769941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.980 qpair failed and we were unable to recover it. 00:34:15.980 [2024-10-08 17:50:07.779960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.980 [2024-10-08 17:50:07.780011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.980 [2024-10-08 17:50:07.780025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.980 [2024-10-08 17:50:07.780032] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.980 [2024-10-08 17:50:07.780038] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.980 [2024-10-08 17:50:07.780052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.980 qpair failed and we were unable to recover it. 
00:34:15.980 [2024-10-08 17:50:07.790034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.980 [2024-10-08 17:50:07.790086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.980 [2024-10-08 17:50:07.790099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.980 [2024-10-08 17:50:07.790106] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.980 [2024-10-08 17:50:07.790112] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.980 [2024-10-08 17:50:07.790126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.980 qpair failed and we were unable to recover it. 00:34:15.980 [2024-10-08 17:50:07.800079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.980 [2024-10-08 17:50:07.800166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.980 [2024-10-08 17:50:07.800179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.980 [2024-10-08 17:50:07.800186] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.980 [2024-10-08 17:50:07.800192] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.980 [2024-10-08 17:50:07.800206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.980 qpair failed and we were unable to recover it. 00:34:15.980 [2024-10-08 17:50:07.810065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.980 [2024-10-08 17:50:07.810140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.980 [2024-10-08 17:50:07.810154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.980 [2024-10-08 17:50:07.810161] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.980 [2024-10-08 17:50:07.810167] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.980 [2024-10-08 17:50:07.810181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.980 qpair failed and we were unable to recover it. 
00:34:15.980 [2024-10-08 17:50:07.819969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.980 [2024-10-08 17:50:07.820021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.980 [2024-10-08 17:50:07.820033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.980 [2024-10-08 17:50:07.820040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.980 [2024-10-08 17:50:07.820046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.980 [2024-10-08 17:50:07.820060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.980 qpair failed and we were unable to recover it. 00:34:15.980 [2024-10-08 17:50:07.830124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.980 [2024-10-08 17:50:07.830177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.980 [2024-10-08 17:50:07.830190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.980 [2024-10-08 17:50:07.830196] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.980 [2024-10-08 17:50:07.830203] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.980 [2024-10-08 17:50:07.830216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.980 qpair failed and we were unable to recover it. 00:34:15.980 [2024-10-08 17:50:07.840212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.980 [2024-10-08 17:50:07.840261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.980 [2024-10-08 17:50:07.840274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.980 [2024-10-08 17:50:07.840281] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.980 [2024-10-08 17:50:07.840287] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.980 [2024-10-08 17:50:07.840301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.980 qpair failed and we were unable to recover it. 
00:34:15.980 [2024-10-08 17:50:07.850183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.980 [2024-10-08 17:50:07.850229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.980 [2024-10-08 17:50:07.850241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.980 [2024-10-08 17:50:07.850248] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.980 [2024-10-08 17:50:07.850261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.980 [2024-10-08 17:50:07.850275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.980 qpair failed and we were unable to recover it. 00:34:15.980 [2024-10-08 17:50:07.860205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.980 [2024-10-08 17:50:07.860252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.980 [2024-10-08 17:50:07.860265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.980 [2024-10-08 17:50:07.860272] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.980 [2024-10-08 17:50:07.860278] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.980 [2024-10-08 17:50:07.860291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.980 qpair failed and we were unable to recover it. 00:34:15.980 [2024-10-08 17:50:07.870248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.980 [2024-10-08 17:50:07.870308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.980 [2024-10-08 17:50:07.870321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.980 [2024-10-08 17:50:07.870327] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.980 [2024-10-08 17:50:07.870334] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.980 [2024-10-08 17:50:07.870347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.980 qpair failed and we were unable to recover it. 
00:34:15.980 [2024-10-08 17:50:07.880256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.980 [2024-10-08 17:50:07.880303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.980 [2024-10-08 17:50:07.880316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.980 [2024-10-08 17:50:07.880323] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.981 [2024-10-08 17:50:07.880329] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.981 [2024-10-08 17:50:07.880342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.981 qpair failed and we were unable to recover it. 00:34:15.981 [2024-10-08 17:50:07.890186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.981 [2024-10-08 17:50:07.890242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.981 [2024-10-08 17:50:07.890254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.981 [2024-10-08 17:50:07.890261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.981 [2024-10-08 17:50:07.890268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.981 [2024-10-08 17:50:07.890281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.981 qpair failed and we were unable to recover it. 00:34:15.981 [2024-10-08 17:50:07.900319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.981 [2024-10-08 17:50:07.900364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.981 [2024-10-08 17:50:07.900377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.981 [2024-10-08 17:50:07.900384] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.981 [2024-10-08 17:50:07.900390] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.981 [2024-10-08 17:50:07.900404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.981 qpair failed and we were unable to recover it. 
00:34:15.981 [2024-10-08 17:50:07.910380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.981 [2024-10-08 17:50:07.910433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.981 [2024-10-08 17:50:07.910447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.981 [2024-10-08 17:50:07.910454] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.981 [2024-10-08 17:50:07.910461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.981 [2024-10-08 17:50:07.910475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.981 qpair failed and we were unable to recover it. 00:34:15.981 [2024-10-08 17:50:07.920249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.981 [2024-10-08 17:50:07.920297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.981 [2024-10-08 17:50:07.920310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.981 [2024-10-08 17:50:07.920316] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.981 [2024-10-08 17:50:07.920323] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.981 [2024-10-08 17:50:07.920336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.981 qpair failed and we were unable to recover it. 00:34:15.981 [2024-10-08 17:50:07.930255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.981 [2024-10-08 17:50:07.930305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.981 [2024-10-08 17:50:07.930318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.981 [2024-10-08 17:50:07.930325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.981 [2024-10-08 17:50:07.930331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.981 [2024-10-08 17:50:07.930344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.981 qpair failed and we were unable to recover it. 
00:34:15.981 [2024-10-08 17:50:07.940419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.981 [2024-10-08 17:50:07.940467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.981 [2024-10-08 17:50:07.940480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.981 [2024-10-08 17:50:07.940490] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.981 [2024-10-08 17:50:07.940497] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.981 [2024-10-08 17:50:07.940511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.981 qpair failed and we were unable to recover it. 00:34:15.981 [2024-10-08 17:50:07.950452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.981 [2024-10-08 17:50:07.950508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.981 [2024-10-08 17:50:07.950521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.981 [2024-10-08 17:50:07.950528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.981 [2024-10-08 17:50:07.950534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.981 [2024-10-08 17:50:07.950548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.981 qpair failed and we were unable to recover it. 00:34:15.981 [2024-10-08 17:50:07.960501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.981 [2024-10-08 17:50:07.960548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.981 [2024-10-08 17:50:07.960562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.981 [2024-10-08 17:50:07.960568] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.981 [2024-10-08 17:50:07.960575] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.981 [2024-10-08 17:50:07.960588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.981 qpair failed and we were unable to recover it. 
00:34:15.981 [2024-10-08 17:50:07.970499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.981 [2024-10-08 17:50:07.970546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.981 [2024-10-08 17:50:07.970559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.981 [2024-10-08 17:50:07.970566] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.981 [2024-10-08 17:50:07.970572] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:15.981 [2024-10-08 17:50:07.970585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.981 qpair failed and we were unable to recover it. 00:34:16.243 [2024-10-08 17:50:07.980513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.243 [2024-10-08 17:50:07.980575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.243 [2024-10-08 17:50:07.980588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.243 [2024-10-08 17:50:07.980594] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.243 [2024-10-08 17:50:07.980601] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.243 [2024-10-08 17:50:07.980614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.243 qpair failed and we were unable to recover it. 00:34:16.243 [2024-10-08 17:50:07.990580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.243 [2024-10-08 17:50:07.990635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.243 [2024-10-08 17:50:07.990648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.243 [2024-10-08 17:50:07.990655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.243 [2024-10-08 17:50:07.990662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.243 [2024-10-08 17:50:07.990675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.243 qpair failed and we were unable to recover it. 
00:34:16.243 [2024-10-08 17:50:08.000582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.243 [2024-10-08 17:50:08.000635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.243 [2024-10-08 17:50:08.000648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.243 [2024-10-08 17:50:08.000655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.243 [2024-10-08 17:50:08.000661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.243 [2024-10-08 17:50:08.000675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.243 qpair failed and we were unable to recover it. 00:34:16.243 [2024-10-08 17:50:08.010622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.243 [2024-10-08 17:50:08.010669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.243 [2024-10-08 17:50:08.010683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.243 [2024-10-08 17:50:08.010689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.243 [2024-10-08 17:50:08.010696] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.243 [2024-10-08 17:50:08.010710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.243 qpair failed and we were unable to recover it. 00:34:16.243 [2024-10-08 17:50:08.020664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.243 [2024-10-08 17:50:08.020723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.243 [2024-10-08 17:50:08.020748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.243 [2024-10-08 17:50:08.020756] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.243 [2024-10-08 17:50:08.020763] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.243 [2024-10-08 17:50:08.020782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.243 qpair failed and we were unable to recover it. 
00:34:16.243 [2024-10-08 17:50:08.030704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.243 [2024-10-08 17:50:08.030765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.243 [2024-10-08 17:50:08.030789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.243 [2024-10-08 17:50:08.030802] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.243 [2024-10-08 17:50:08.030810] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.243 [2024-10-08 17:50:08.030829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.243 qpair failed and we were unable to recover it. 00:34:16.243 [2024-10-08 17:50:08.040737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.243 [2024-10-08 17:50:08.040789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.243 [2024-10-08 17:50:08.040804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.243 [2024-10-08 17:50:08.040812] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.243 [2024-10-08 17:50:08.040818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.243 [2024-10-08 17:50:08.040833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.243 qpair failed and we were unable to recover it. 00:34:16.243 [2024-10-08 17:50:08.050573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.243 [2024-10-08 17:50:08.050622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.243 [2024-10-08 17:50:08.050636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.243 [2024-10-08 17:50:08.050642] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.243 [2024-10-08 17:50:08.050650] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.243 [2024-10-08 17:50:08.050664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.243 qpair failed and we were unable to recover it. 
00:34:16.243 [2024-10-08 17:50:08.060596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.243 [2024-10-08 17:50:08.060647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.243 [2024-10-08 17:50:08.060661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.243 [2024-10-08 17:50:08.060669] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.243 [2024-10-08 17:50:08.060675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.243 [2024-10-08 17:50:08.060693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.243 qpair failed and we were unable to recover it. 00:34:16.243 [2024-10-08 17:50:08.070682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.243 [2024-10-08 17:50:08.070735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.243 [2024-10-08 17:50:08.070749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.243 [2024-10-08 17:50:08.070756] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.243 [2024-10-08 17:50:08.070762] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.243 [2024-10-08 17:50:08.070776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.243 qpair failed and we were unable to recover it. 00:34:16.243 [2024-10-08 17:50:08.080812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.243 [2024-10-08 17:50:08.080859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.243 [2024-10-08 17:50:08.080872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.243 [2024-10-08 17:50:08.080880] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.243 [2024-10-08 17:50:08.080886] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.243 [2024-10-08 17:50:08.080900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.243 qpair failed and we were unable to recover it. 
00:34:16.243 [2024-10-08 17:50:08.090809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.243 [2024-10-08 17:50:08.090864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.243 [2024-10-08 17:50:08.090878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.243 [2024-10-08 17:50:08.090884] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.243 [2024-10-08 17:50:08.090891] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.243 [2024-10-08 17:50:08.090904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.244 qpair failed and we were unable to recover it. 00:34:16.244 [2024-10-08 17:50:08.100709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.244 [2024-10-08 17:50:08.100765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.244 [2024-10-08 17:50:08.100778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.244 [2024-10-08 17:50:08.100784] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.244 [2024-10-08 17:50:08.100791] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.244 [2024-10-08 17:50:08.100804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.244 qpair failed and we were unable to recover it. 00:34:16.244 [2024-10-08 17:50:08.110907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.244 [2024-10-08 17:50:08.110964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.244 [2024-10-08 17:50:08.110985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.244 [2024-10-08 17:50:08.110992] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.244 [2024-10-08 17:50:08.111001] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.244 [2024-10-08 17:50:08.111016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.244 qpair failed and we were unable to recover it. 
00:34:16.244 [2024-10-08 17:50:08.120907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.244 [2024-10-08 17:50:08.120957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.244 [2024-10-08 17:50:08.120977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.244 [2024-10-08 17:50:08.120984] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.244 [2024-10-08 17:50:08.120990] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.244 [2024-10-08 17:50:08.121005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.244 qpair failed and we were unable to recover it. 00:34:16.244 [2024-10-08 17:50:08.130928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.244 [2024-10-08 17:50:08.130978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.244 [2024-10-08 17:50:08.130992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.244 [2024-10-08 17:50:08.130999] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.244 [2024-10-08 17:50:08.131005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.244 [2024-10-08 17:50:08.131020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.244 qpair failed and we were unable to recover it. 00:34:16.244 [2024-10-08 17:50:08.140953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.244 [2024-10-08 17:50:08.141006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.244 [2024-10-08 17:50:08.141019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.244 [2024-10-08 17:50:08.141026] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.244 [2024-10-08 17:50:08.141032] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.244 [2024-10-08 17:50:08.141046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.244 qpair failed and we were unable to recover it. 
00:34:16.244 [2024-10-08 17:50:08.151005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.244 [2024-10-08 17:50:08.151060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.244 [2024-10-08 17:50:08.151073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.244 [2024-10-08 17:50:08.151080] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.244 [2024-10-08 17:50:08.151086] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.244 [2024-10-08 17:50:08.151100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.244 qpair failed and we were unable to recover it. 00:34:16.244 [2024-10-08 17:50:08.161030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.244 [2024-10-08 17:50:08.161088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.244 [2024-10-08 17:50:08.161101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.244 [2024-10-08 17:50:08.161108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.244 [2024-10-08 17:50:08.161115] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.244 [2024-10-08 17:50:08.161132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.244 qpair failed and we were unable to recover it. 00:34:16.244 [2024-10-08 17:50:08.170917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.244 [2024-10-08 17:50:08.170962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.244 [2024-10-08 17:50:08.170981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.244 [2024-10-08 17:50:08.170988] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.244 [2024-10-08 17:50:08.170994] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.244 [2024-10-08 17:50:08.171015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.244 qpair failed and we were unable to recover it. 
00:34:16.244 [2024-10-08 17:50:08.181069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.244 [2024-10-08 17:50:08.181120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.244 [2024-10-08 17:50:08.181133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.244 [2024-10-08 17:50:08.181140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.244 [2024-10-08 17:50:08.181146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.244 [2024-10-08 17:50:08.181160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.244 qpair failed and we were unable to recover it. 00:34:16.244 [2024-10-08 17:50:08.191121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.244 [2024-10-08 17:50:08.191179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.244 [2024-10-08 17:50:08.191192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.244 [2024-10-08 17:50:08.191199] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.244 [2024-10-08 17:50:08.191205] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.244 [2024-10-08 17:50:08.191219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.244 qpair failed and we were unable to recover it. 00:34:16.244 [2024-10-08 17:50:08.201104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.244 [2024-10-08 17:50:08.201154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.244 [2024-10-08 17:50:08.201167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.244 [2024-10-08 17:50:08.201173] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.244 [2024-10-08 17:50:08.201179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.244 [2024-10-08 17:50:08.201193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.244 qpair failed and we were unable to recover it. 
00:34:16.244 [2024-10-08 17:50:08.211144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.244 [2024-10-08 17:50:08.211233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.244 [2024-10-08 17:50:08.211249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.244 [2024-10-08 17:50:08.211256] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.244 [2024-10-08 17:50:08.211262] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.244 [2024-10-08 17:50:08.211276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.244 qpair failed and we were unable to recover it. 00:34:16.244 [2024-10-08 17:50:08.221214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.244 [2024-10-08 17:50:08.221307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.244 [2024-10-08 17:50:08.221320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.244 [2024-10-08 17:50:08.221326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.244 [2024-10-08 17:50:08.221332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.244 [2024-10-08 17:50:08.221346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.244 qpair failed and we were unable to recover it. 00:34:16.244 [2024-10-08 17:50:08.231232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.244 [2024-10-08 17:50:08.231312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.244 [2024-10-08 17:50:08.231325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.244 [2024-10-08 17:50:08.231332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.244 [2024-10-08 17:50:08.231338] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.244 [2024-10-08 17:50:08.231352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.244 qpair failed and we were unable to recover it. 
00:34:16.508 [2024-10-08 17:50:08.241248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.508 [2024-10-08 17:50:08.241299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.508 [2024-10-08 17:50:08.241311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.508 [2024-10-08 17:50:08.241318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.508 [2024-10-08 17:50:08.241325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.508 [2024-10-08 17:50:08.241338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-10-08 17:50:08.251257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.508 [2024-10-08 17:50:08.251306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.508 [2024-10-08 17:50:08.251319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.508 [2024-10-08 17:50:08.251326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.508 [2024-10-08 17:50:08.251332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.508 [2024-10-08 17:50:08.251349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-10-08 17:50:08.261286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.508 [2024-10-08 17:50:08.261330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.508 [2024-10-08 17:50:08.261343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.508 [2024-10-08 17:50:08.261350] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.508 [2024-10-08 17:50:08.261356] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.508 [2024-10-08 17:50:08.261370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.508 qpair failed and we were unable to recover it. 
00:34:16.508 [2024-10-08 17:50:08.271369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.508 [2024-10-08 17:50:08.271420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.508 [2024-10-08 17:50:08.271433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.508 [2024-10-08 17:50:08.271439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.508 [2024-10-08 17:50:08.271446] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.508 [2024-10-08 17:50:08.271459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-10-08 17:50:08.281380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.508 [2024-10-08 17:50:08.281472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.508 [2024-10-08 17:50:08.281484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.508 [2024-10-08 17:50:08.281491] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.508 [2024-10-08 17:50:08.281497] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.508 [2024-10-08 17:50:08.281511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-10-08 17:50:08.291240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.508 [2024-10-08 17:50:08.291290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.508 [2024-10-08 17:50:08.291303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.508 [2024-10-08 17:50:08.291310] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.508 [2024-10-08 17:50:08.291316] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.508 [2024-10-08 17:50:08.291329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.508 qpair failed and we were unable to recover it. 
00:34:16.508 [2024-10-08 17:50:08.301414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.508 [2024-10-08 17:50:08.301508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.508 [2024-10-08 17:50:08.301525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.508 [2024-10-08 17:50:08.301532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.508 [2024-10-08 17:50:08.301538] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.508 [2024-10-08 17:50:08.301552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-10-08 17:50:08.311347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.508 [2024-10-08 17:50:08.311413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.508 [2024-10-08 17:50:08.311426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.508 [2024-10-08 17:50:08.311433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.508 [2024-10-08 17:50:08.311439] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.508 [2024-10-08 17:50:08.311452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-10-08 17:50:08.321484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.508 [2024-10-08 17:50:08.321532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.508 [2024-10-08 17:50:08.321545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.508 [2024-10-08 17:50:08.321551] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.508 [2024-10-08 17:50:08.321557] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.508 [2024-10-08 17:50:08.321571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.508 qpair failed and we were unable to recover it. 
00:34:16.508 [2024-10-08 17:50:08.331459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.508 [2024-10-08 17:50:08.331506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.508 [2024-10-08 17:50:08.331519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.508 [2024-10-08 17:50:08.331526] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.508 [2024-10-08 17:50:08.331532] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.508 [2024-10-08 17:50:08.331546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-10-08 17:50:08.341485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.508 [2024-10-08 17:50:08.341530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.508 [2024-10-08 17:50:08.341543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.508 [2024-10-08 17:50:08.341550] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.508 [2024-10-08 17:50:08.341559] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.508 [2024-10-08 17:50:08.341573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-10-08 17:50:08.351604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.508 [2024-10-08 17:50:08.351656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.508 [2024-10-08 17:50:08.351669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.508 [2024-10-08 17:50:08.351676] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.508 [2024-10-08 17:50:08.351682] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.508 [2024-10-08 17:50:08.351696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.508 qpair failed and we were unable to recover it. 
00:34:16.508 [2024-10-08 17:50:08.361597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.508 [2024-10-08 17:50:08.361651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.508 [2024-10-08 17:50:08.361664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.508 [2024-10-08 17:50:08.361671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.508 [2024-10-08 17:50:08.361677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.508 [2024-10-08 17:50:08.361691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-10-08 17:50:08.371603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.508 [2024-10-08 17:50:08.371662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.508 [2024-10-08 17:50:08.371675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.508 [2024-10-08 17:50:08.371682] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.508 [2024-10-08 17:50:08.371688] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.508 [2024-10-08 17:50:08.371701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-10-08 17:50:08.381634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.508 [2024-10-08 17:50:08.381683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.508 [2024-10-08 17:50:08.381696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.508 [2024-10-08 17:50:08.381702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.508 [2024-10-08 17:50:08.381708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.508 [2024-10-08 17:50:08.381722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.508 qpair failed and we were unable to recover it. 
00:34:16.508 [2024-10-08 17:50:08.391671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.508 [2024-10-08 17:50:08.391743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.508 [2024-10-08 17:50:08.391756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.508 [2024-10-08 17:50:08.391763] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.508 [2024-10-08 17:50:08.391769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.508 [2024-10-08 17:50:08.391783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-10-08 17:50:08.401682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.508 [2024-10-08 17:50:08.401734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.508 [2024-10-08 17:50:08.401746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.508 [2024-10-08 17:50:08.401753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.508 [2024-10-08 17:50:08.401759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.508 [2024-10-08 17:50:08.401773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-10-08 17:50:08.411724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.508 [2024-10-08 17:50:08.411778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.508 [2024-10-08 17:50:08.411793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.508 [2024-10-08 17:50:08.411799] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.508 [2024-10-08 17:50:08.411805] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.508 [2024-10-08 17:50:08.411821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.508 qpair failed and we were unable to recover it. 
00:34:16.508 [2024-10-08 17:50:08.421745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.508 [2024-10-08 17:50:08.421791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.508 [2024-10-08 17:50:08.421804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.508 [2024-10-08 17:50:08.421811] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.508 [2024-10-08 17:50:08.421817] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.508 [2024-10-08 17:50:08.421831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-10-08 17:50:08.431810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.508 [2024-10-08 17:50:08.431863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.508 [2024-10-08 17:50:08.431876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.508 [2024-10-08 17:50:08.431886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.508 [2024-10-08 17:50:08.431892] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.508 [2024-10-08 17:50:08.431906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-10-08 17:50:08.441697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.508 [2024-10-08 17:50:08.441747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.508 [2024-10-08 17:50:08.441760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.508 [2024-10-08 17:50:08.441767] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.508 [2024-10-08 17:50:08.441773] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.508 [2024-10-08 17:50:08.441787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.508 qpair failed and we were unable to recover it. 
00:34:16.508 [2024-10-08 17:50:08.451828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.508 [2024-10-08 17:50:08.451874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.508 [2024-10-08 17:50:08.451886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.508 [2024-10-08 17:50:08.451893] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.508 [2024-10-08 17:50:08.451900] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.508 [2024-10-08 17:50:08.451913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.508 qpair failed and we were unable to recover it. 00:34:16.508 [2024-10-08 17:50:08.461843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.509 [2024-10-08 17:50:08.461907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.509 [2024-10-08 17:50:08.461920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.509 [2024-10-08 17:50:08.461927] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.509 [2024-10-08 17:50:08.461933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.509 [2024-10-08 17:50:08.461947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-10-08 17:50:08.471800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.509 [2024-10-08 17:50:08.471857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.509 [2024-10-08 17:50:08.471870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.509 [2024-10-08 17:50:08.471877] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.509 [2024-10-08 17:50:08.471883] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.509 [2024-10-08 17:50:08.471896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.509 qpair failed and we were unable to recover it. 
00:34:16.509 [2024-10-08 17:50:08.481804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.509 [2024-10-08 17:50:08.481852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.509 [2024-10-08 17:50:08.481865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.509 [2024-10-08 17:50:08.481872] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.509 [2024-10-08 17:50:08.481878] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.509 [2024-10-08 17:50:08.481892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.509 [2024-10-08 17:50:08.491947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.509 [2024-10-08 17:50:08.492038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.509 [2024-10-08 17:50:08.492051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.509 [2024-10-08 17:50:08.492058] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.509 [2024-10-08 17:50:08.492064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.509 [2024-10-08 17:50:08.492078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.509 qpair failed and we were unable to recover it. 00:34:16.769 [2024-10-08 17:50:08.501964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.769 [2024-10-08 17:50:08.502015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.769 [2024-10-08 17:50:08.502029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.769 [2024-10-08 17:50:08.502035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.769 [2024-10-08 17:50:08.502042] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.769 [2024-10-08 17:50:08.502056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.769 qpair failed and we were unable to recover it. 
00:34:16.769 [2024-10-08 17:50:08.512044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.769 [2024-10-08 17:50:08.512098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.769 [2024-10-08 17:50:08.512111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.769 [2024-10-08 17:50:08.512119] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.769 [2024-10-08 17:50:08.512125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.769 [2024-10-08 17:50:08.512139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.769 qpair failed and we were unable to recover it. 00:34:16.769 [2024-10-08 17:50:08.522038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.769 [2024-10-08 17:50:08.522087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.769 [2024-10-08 17:50:08.522100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.769 [2024-10-08 17:50:08.522111] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.769 [2024-10-08 17:50:08.522117] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.769 [2024-10-08 17:50:08.522131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.769 qpair failed and we were unable to recover it. 00:34:16.769 [2024-10-08 17:50:08.532053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.769 [2024-10-08 17:50:08.532097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.769 [2024-10-08 17:50:08.532110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.769 [2024-10-08 17:50:08.532117] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.769 [2024-10-08 17:50:08.532124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.769 [2024-10-08 17:50:08.532137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.769 qpair failed and we were unable to recover it. 
00:34:16.769 [2024-10-08 17:50:08.542086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.769 [2024-10-08 17:50:08.542174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.769 [2024-10-08 17:50:08.542187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.769 [2024-10-08 17:50:08.542194] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.769 [2024-10-08 17:50:08.542200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.769 [2024-10-08 17:50:08.542214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.769 qpair failed and we were unable to recover it. 00:34:16.770 [2024-10-08 17:50:08.552152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.770 [2024-10-08 17:50:08.552207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.770 [2024-10-08 17:50:08.552220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.770 [2024-10-08 17:50:08.552227] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.770 [2024-10-08 17:50:08.552233] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.770 [2024-10-08 17:50:08.552247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.770 qpair failed and we were unable to recover it. 00:34:16.770 [2024-10-08 17:50:08.562147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.770 [2024-10-08 17:50:08.562196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.770 [2024-10-08 17:50:08.562209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.770 [2024-10-08 17:50:08.562216] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.770 [2024-10-08 17:50:08.562222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.770 [2024-10-08 17:50:08.562236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.770 qpair failed and we were unable to recover it. 
00:34:16.770 [2024-10-08 17:50:08.572202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.770 [2024-10-08 17:50:08.572287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.770 [2024-10-08 17:50:08.572300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.770 [2024-10-08 17:50:08.572307] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.770 [2024-10-08 17:50:08.572313] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.770 [2024-10-08 17:50:08.572327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.770 qpair failed and we were unable to recover it. 00:34:16.770 [2024-10-08 17:50:08.582048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.770 [2024-10-08 17:50:08.582096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.770 [2024-10-08 17:50:08.582109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.770 [2024-10-08 17:50:08.582115] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.770 [2024-10-08 17:50:08.582122] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.770 [2024-10-08 17:50:08.582135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.770 qpair failed and we were unable to recover it. 00:34:16.770 [2024-10-08 17:50:08.592268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.770 [2024-10-08 17:50:08.592321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.770 [2024-10-08 17:50:08.592334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.770 [2024-10-08 17:50:08.592341] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.770 [2024-10-08 17:50:08.592347] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.770 [2024-10-08 17:50:08.592361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.770 qpair failed and we were unable to recover it. 
00:34:16.770 [2024-10-08 17:50:08.602260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.770 [2024-10-08 17:50:08.602305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.770 [2024-10-08 17:50:08.602318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.770 [2024-10-08 17:50:08.602325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.770 [2024-10-08 17:50:08.602331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.770 [2024-10-08 17:50:08.602345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.770 qpair failed and we were unable to recover it. 00:34:16.770 [2024-10-08 17:50:08.612316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.770 [2024-10-08 17:50:08.612401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.770 [2024-10-08 17:50:08.612420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.770 [2024-10-08 17:50:08.612427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.770 [2024-10-08 17:50:08.612433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.770 [2024-10-08 17:50:08.612447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.770 qpair failed and we were unable to recover it. 00:34:16.770 [2024-10-08 17:50:08.622265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.770 [2024-10-08 17:50:08.622350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.770 [2024-10-08 17:50:08.622363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.770 [2024-10-08 17:50:08.622370] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.770 [2024-10-08 17:50:08.622376] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.770 [2024-10-08 17:50:08.622389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.770 qpair failed and we were unable to recover it. 
00:34:16.770 [2024-10-08 17:50:08.632374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.770 [2024-10-08 17:50:08.632448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.770 [2024-10-08 17:50:08.632461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.770 [2024-10-08 17:50:08.632468] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.770 [2024-10-08 17:50:08.632474] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.770 [2024-10-08 17:50:08.632488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.770 qpair failed and we were unable to recover it. 00:34:16.770 [2024-10-08 17:50:08.642350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.770 [2024-10-08 17:50:08.642419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.770 [2024-10-08 17:50:08.642432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.770 [2024-10-08 17:50:08.642439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.770 [2024-10-08 17:50:08.642445] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.770 [2024-10-08 17:50:08.642458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.770 qpair failed and we were unable to recover it. 00:34:16.770 [2024-10-08 17:50:08.652378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.770 [2024-10-08 17:50:08.652435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.770 [2024-10-08 17:50:08.652448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.770 [2024-10-08 17:50:08.652455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.770 [2024-10-08 17:50:08.652461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.770 [2024-10-08 17:50:08.652479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.770 qpair failed and we were unable to recover it. 
00:34:16.770 [2024-10-08 17:50:08.662409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.770 [2024-10-08 17:50:08.662497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.770 [2024-10-08 17:50:08.662510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.770 [2024-10-08 17:50:08.662517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.770 [2024-10-08 17:50:08.662523] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.770 [2024-10-08 17:50:08.662536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.770 qpair failed and we were unable to recover it. 00:34:16.770 [2024-10-08 17:50:08.672478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.770 [2024-10-08 17:50:08.672530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.770 [2024-10-08 17:50:08.672543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.770 [2024-10-08 17:50:08.672550] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.770 [2024-10-08 17:50:08.672556] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.770 [2024-10-08 17:50:08.672570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.770 qpair failed and we were unable to recover it. 00:34:16.770 [2024-10-08 17:50:08.682485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.770 [2024-10-08 17:50:08.682538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.770 [2024-10-08 17:50:08.682551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.770 [2024-10-08 17:50:08.682558] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.770 [2024-10-08 17:50:08.682564] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.770 [2024-10-08 17:50:08.682577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.770 qpair failed and we were unable to recover it. 
00:34:16.771 [2024-10-08 17:50:08.692357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.771 [2024-10-08 17:50:08.692404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.771 [2024-10-08 17:50:08.692417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.771 [2024-10-08 17:50:08.692424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.771 [2024-10-08 17:50:08.692430] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.771 [2024-10-08 17:50:08.692444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.771 qpair failed and we were unable to recover it. 00:34:16.771 [2024-10-08 17:50:08.702520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.771 [2024-10-08 17:50:08.702570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.771 [2024-10-08 17:50:08.702586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.771 [2024-10-08 17:50:08.702593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.771 [2024-10-08 17:50:08.702599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.771 [2024-10-08 17:50:08.702613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.771 qpair failed and we were unable to recover it. 00:34:16.771 [2024-10-08 17:50:08.712594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.771 [2024-10-08 17:50:08.712647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.771 [2024-10-08 17:50:08.712661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.771 [2024-10-08 17:50:08.712667] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.771 [2024-10-08 17:50:08.712674] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.771 [2024-10-08 17:50:08.712687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.771 qpair failed and we were unable to recover it. 
00:34:16.771 [2024-10-08 17:50:08.722597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.771 [2024-10-08 17:50:08.722646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.771 [2024-10-08 17:50:08.722659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.771 [2024-10-08 17:50:08.722666] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.771 [2024-10-08 17:50:08.722672] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.771 [2024-10-08 17:50:08.722685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.771 qpair failed and we were unable to recover it. 00:34:16.771 [2024-10-08 17:50:08.732505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.771 [2024-10-08 17:50:08.732556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.771 [2024-10-08 17:50:08.732571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.771 [2024-10-08 17:50:08.732577] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.771 [2024-10-08 17:50:08.732584] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.771 [2024-10-08 17:50:08.732603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.771 qpair failed and we were unable to recover it. 00:34:16.771 [2024-10-08 17:50:08.742637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.771 [2024-10-08 17:50:08.742682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.771 [2024-10-08 17:50:08.742695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.771 [2024-10-08 17:50:08.742702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.771 [2024-10-08 17:50:08.742708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.771 [2024-10-08 17:50:08.742725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.771 qpair failed and we were unable to recover it. 
00:34:16.771 [2024-10-08 17:50:08.752698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.771 [2024-10-08 17:50:08.752755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.771 [2024-10-08 17:50:08.752767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.771 [2024-10-08 17:50:08.752774] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.771 [2024-10-08 17:50:08.752781] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:16.771 [2024-10-08 17:50:08.752794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:16.771 qpair failed and we were unable to recover it. 00:34:17.034 [2024-10-08 17:50:08.762695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.034 [2024-10-08 17:50:08.762758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.035 [2024-10-08 17:50:08.762771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.035 [2024-10-08 17:50:08.762778] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.035 [2024-10-08 17:50:08.762785] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.035 [2024-10-08 17:50:08.762798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-10-08 17:50:08.772705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.035 [2024-10-08 17:50:08.772751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.035 [2024-10-08 17:50:08.772764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.035 [2024-10-08 17:50:08.772771] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.035 [2024-10-08 17:50:08.772777] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.035 [2024-10-08 17:50:08.772791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.035 qpair failed and we were unable to recover it. 
00:34:17.035 [2024-10-08 17:50:08.782730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.035 [2024-10-08 17:50:08.782778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.035 [2024-10-08 17:50:08.782791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.035 [2024-10-08 17:50:08.782798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.035 [2024-10-08 17:50:08.782804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.035 [2024-10-08 17:50:08.782820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-10-08 17:50:08.792811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.035 [2024-10-08 17:50:08.792893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.035 [2024-10-08 17:50:08.792910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.035 [2024-10-08 17:50:08.792916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.035 [2024-10-08 17:50:08.792922] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.035 [2024-10-08 17:50:08.792936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-10-08 17:50:08.802804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.035 [2024-10-08 17:50:08.802858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.035 [2024-10-08 17:50:08.802871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.035 [2024-10-08 17:50:08.802878] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.035 [2024-10-08 17:50:08.802884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.035 [2024-10-08 17:50:08.802898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.035 qpair failed and we were unable to recover it. 
00:34:17.035 [2024-10-08 17:50:08.812814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.035 [2024-10-08 17:50:08.812903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.035 [2024-10-08 17:50:08.812915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.035 [2024-10-08 17:50:08.812922] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.035 [2024-10-08 17:50:08.812929] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.035 [2024-10-08 17:50:08.812942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-10-08 17:50:08.822835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.035 [2024-10-08 17:50:08.822885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.035 [2024-10-08 17:50:08.822898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.035 [2024-10-08 17:50:08.822904] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.035 [2024-10-08 17:50:08.822911] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.035 [2024-10-08 17:50:08.822924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-10-08 17:50:08.832881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.035 [2024-10-08 17:50:08.832944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.035 [2024-10-08 17:50:08.832957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.035 [2024-10-08 17:50:08.832964] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.035 [2024-10-08 17:50:08.832976] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.035 [2024-10-08 17:50:08.832991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.035 qpair failed and we were unable to recover it. 
00:34:17.035 [2024-10-08 17:50:08.842902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.035 [2024-10-08 17:50:08.842950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.035 [2024-10-08 17:50:08.842964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.035 [2024-10-08 17:50:08.842971] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.035 [2024-10-08 17:50:08.842980] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.035 [2024-10-08 17:50:08.842994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-10-08 17:50:08.852792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.035 [2024-10-08 17:50:08.852847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.035 [2024-10-08 17:50:08.852859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.035 [2024-10-08 17:50:08.852866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.035 [2024-10-08 17:50:08.852873] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.035 [2024-10-08 17:50:08.852886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-10-08 17:50:08.862817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.035 [2024-10-08 17:50:08.862863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.035 [2024-10-08 17:50:08.862876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.035 [2024-10-08 17:50:08.862883] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.035 [2024-10-08 17:50:08.862889] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.035 [2024-10-08 17:50:08.862903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.035 qpair failed and we were unable to recover it. 
00:34:17.035 [2024-10-08 17:50:08.873021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.035 [2024-10-08 17:50:08.873077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.035 [2024-10-08 17:50:08.873090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.035 [2024-10-08 17:50:08.873097] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.035 [2024-10-08 17:50:08.873103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.035 [2024-10-08 17:50:08.873117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-10-08 17:50:08.882879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.035 [2024-10-08 17:50:08.882935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.035 [2024-10-08 17:50:08.882948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.035 [2024-10-08 17:50:08.882955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.035 [2024-10-08 17:50:08.882961] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.035 [2024-10-08 17:50:08.882981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-10-08 17:50:08.893011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.035 [2024-10-08 17:50:08.893057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.035 [2024-10-08 17:50:08.893070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.035 [2024-10-08 17:50:08.893077] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.035 [2024-10-08 17:50:08.893083] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.035 [2024-10-08 17:50:08.893097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.035 qpair failed and we were unable to recover it. 
00:34:17.035 [2024-10-08 17:50:08.903055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.036 [2024-10-08 17:50:08.903104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.036 [2024-10-08 17:50:08.903117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.036 [2024-10-08 17:50:08.903124] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.036 [2024-10-08 17:50:08.903130] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.036 [2024-10-08 17:50:08.903144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-10-08 17:50:08.913086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.036 [2024-10-08 17:50:08.913148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.036 [2024-10-08 17:50:08.913161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.036 [2024-10-08 17:50:08.913168] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.036 [2024-10-08 17:50:08.913174] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.036 [2024-10-08 17:50:08.913188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-10-08 17:50:08.923111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.036 [2024-10-08 17:50:08.923171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.036 [2024-10-08 17:50:08.923184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.036 [2024-10-08 17:50:08.923191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.036 [2024-10-08 17:50:08.923201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.036 [2024-10-08 17:50:08.923215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.036 qpair failed and we were unable to recover it. 
00:34:17.036 [2024-10-08 17:50:08.933127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.036 [2024-10-08 17:50:08.933173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.036 [2024-10-08 17:50:08.933186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.036 [2024-10-08 17:50:08.933193] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.036 [2024-10-08 17:50:08.933199] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.036 [2024-10-08 17:50:08.933213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-10-08 17:50:08.943149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.036 [2024-10-08 17:50:08.943192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.036 [2024-10-08 17:50:08.943205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.036 [2024-10-08 17:50:08.943212] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.036 [2024-10-08 17:50:08.943218] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.036 [2024-10-08 17:50:08.943232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-10-08 17:50:08.953217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.036 [2024-10-08 17:50:08.953273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.036 [2024-10-08 17:50:08.953287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.036 [2024-10-08 17:50:08.953294] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.036 [2024-10-08 17:50:08.953300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.036 [2024-10-08 17:50:08.953315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.036 qpair failed and we were unable to recover it. 
00:34:17.036 [2024-10-08 17:50:08.963210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.036 [2024-10-08 17:50:08.963288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.036 [2024-10-08 17:50:08.963301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.036 [2024-10-08 17:50:08.963308] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.036 [2024-10-08 17:50:08.963314] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.036 [2024-10-08 17:50:08.963328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-10-08 17:50:08.973099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.036 [2024-10-08 17:50:08.973150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.036 [2024-10-08 17:50:08.973163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.036 [2024-10-08 17:50:08.973170] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.036 [2024-10-08 17:50:08.973176] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.036 [2024-10-08 17:50:08.973190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-10-08 17:50:08.983269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.036 [2024-10-08 17:50:08.983319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.036 [2024-10-08 17:50:08.983332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.036 [2024-10-08 17:50:08.983339] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.036 [2024-10-08 17:50:08.983345] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.036 [2024-10-08 17:50:08.983359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.036 qpair failed and we were unable to recover it. 
00:34:17.036 [2024-10-08 17:50:08.993197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.036 [2024-10-08 17:50:08.993256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.036 [2024-10-08 17:50:08.993269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.036 [2024-10-08 17:50:08.993276] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.036 [2024-10-08 17:50:08.993282] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.036 [2024-10-08 17:50:08.993295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-10-08 17:50:09.003188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.036 [2024-10-08 17:50:09.003240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.036 [2024-10-08 17:50:09.003253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.036 [2024-10-08 17:50:09.003260] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.036 [2024-10-08 17:50:09.003266] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.036 [2024-10-08 17:50:09.003279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-10-08 17:50:09.013208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.036 [2024-10-08 17:50:09.013257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.036 [2024-10-08 17:50:09.013270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.036 [2024-10-08 17:50:09.013281] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.036 [2024-10-08 17:50:09.013287] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.036 [2024-10-08 17:50:09.013301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.036 qpair failed and we were unable to recover it. 
00:34:17.036 [2024-10-08 17:50:09.023301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.036 [2024-10-08 17:50:09.023353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.036 [2024-10-08 17:50:09.023366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.036 [2024-10-08 17:50:09.023372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.036 [2024-10-08 17:50:09.023379] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.036 [2024-10-08 17:50:09.023392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.298 [2024-10-08 17:50:09.033465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.298 [2024-10-08 17:50:09.033521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.298 [2024-10-08 17:50:09.033534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.298 [2024-10-08 17:50:09.033541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.298 [2024-10-08 17:50:09.033547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.298 [2024-10-08 17:50:09.033561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.298 qpair failed and we were unable to recover it. 00:34:17.298 [2024-10-08 17:50:09.043422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.298 [2024-10-08 17:50:09.043469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.298 [2024-10-08 17:50:09.043482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.298 [2024-10-08 17:50:09.043488] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.298 [2024-10-08 17:50:09.043495] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.298 [2024-10-08 17:50:09.043509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.298 qpair failed and we were unable to recover it. 
00:34:17.298 [2024-10-08 17:50:09.053521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.298 [2024-10-08 17:50:09.053574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.298 [2024-10-08 17:50:09.053587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.298 [2024-10-08 17:50:09.053594] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.298 [2024-10-08 17:50:09.053600] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.298 [2024-10-08 17:50:09.053614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.298 qpair failed and we were unable to recover it. 00:34:17.298 [2024-10-08 17:50:09.063532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.298 [2024-10-08 17:50:09.063579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.298 [2024-10-08 17:50:09.063592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.298 [2024-10-08 17:50:09.063599] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.298 [2024-10-08 17:50:09.063606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.298 [2024-10-08 17:50:09.063619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.298 qpair failed and we were unable to recover it. 00:34:17.298 [2024-10-08 17:50:09.073517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.298 [2024-10-08 17:50:09.073582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.298 [2024-10-08 17:50:09.073595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.298 [2024-10-08 17:50:09.073602] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.298 [2024-10-08 17:50:09.073608] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.298 [2024-10-08 17:50:09.073622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.298 qpair failed and we were unable to recover it. 
00:34:17.298 [2024-10-08 17:50:09.083524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.298 [2024-10-08 17:50:09.083576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.298 [2024-10-08 17:50:09.083590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.298 [2024-10-08 17:50:09.083597] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.298 [2024-10-08 17:50:09.083603] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.298 [2024-10-08 17:50:09.083621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.298 qpair failed and we were unable to recover it. 00:34:17.298 [2024-10-08 17:50:09.093518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.298 [2024-10-08 17:50:09.093581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.298 [2024-10-08 17:50:09.093594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.298 [2024-10-08 17:50:09.093601] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.298 [2024-10-08 17:50:09.093607] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.298 [2024-10-08 17:50:09.093620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.298 qpair failed and we were unable to recover it. 00:34:17.298 [2024-10-08 17:50:09.103576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.298 [2024-10-08 17:50:09.103627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.298 [2024-10-08 17:50:09.103640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.299 [2024-10-08 17:50:09.103650] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.299 [2024-10-08 17:50:09.103656] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.299 [2024-10-08 17:50:09.103670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.299 qpair failed and we were unable to recover it. 
[... the same seven-line CONNECT failure sequence (ctrlr.c: 762 "Unknown controller ID 0x1"; nvme_fabric.c: 599 "Connect command failed, rc -5" for trtype:TCP traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1; nvme_fabric.c: 610 "sct 1, sc 130"; nvme_tcp.c:2459/2236 failed CONNECT poll for tqpair=0x7faa44000b90; nvme_qpair.c: 804 "CQ transport error -6 (No such device or address) on qpair id 3"; "qpair failed and we were unable to recover it.") repeats for every connect attempt at roughly 10 ms intervals from 17:50:09.113 through 17:50:09.735 ...]
00:34:17.826 [2024-10-08 17:50:09.745353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.826 [2024-10-08 17:50:09.745425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.826 [2024-10-08 17:50:09.745437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.826 [2024-10-08 17:50:09.745444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.826 [2024-10-08 17:50:09.745451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.826 [2024-10-08 17:50:09.745464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.826 qpair failed and we were unable to recover it. 00:34:17.826 [2024-10-08 17:50:09.755389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.826 [2024-10-08 17:50:09.755444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.826 [2024-10-08 17:50:09.755457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.826 [2024-10-08 17:50:09.755464] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.826 [2024-10-08 17:50:09.755470] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.826 [2024-10-08 17:50:09.755485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.826 qpair failed and we were unable to recover it. 00:34:17.826 [2024-10-08 17:50:09.765360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.826 [2024-10-08 17:50:09.765413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.826 [2024-10-08 17:50:09.765426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.826 [2024-10-08 17:50:09.765433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.826 [2024-10-08 17:50:09.765439] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.826 [2024-10-08 17:50:09.765453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.826 qpair failed and we were unable to recover it. 
00:34:17.826 [2024-10-08 17:50:09.775368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.826 [2024-10-08 17:50:09.775421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.826 [2024-10-08 17:50:09.775437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.826 [2024-10-08 17:50:09.775444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.826 [2024-10-08 17:50:09.775450] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.826 [2024-10-08 17:50:09.775464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.826 qpair failed and we were unable to recover it. 00:34:17.826 [2024-10-08 17:50:09.785467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.826 [2024-10-08 17:50:09.785515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.826 [2024-10-08 17:50:09.785528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.826 [2024-10-08 17:50:09.785534] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.826 [2024-10-08 17:50:09.785540] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.826 [2024-10-08 17:50:09.785554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.826 qpair failed and we were unable to recover it. 00:34:17.826 [2024-10-08 17:50:09.795483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.826 [2024-10-08 17:50:09.795550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.826 [2024-10-08 17:50:09.795562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.826 [2024-10-08 17:50:09.795569] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.826 [2024-10-08 17:50:09.795575] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.826 [2024-10-08 17:50:09.795589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.826 qpair failed and we were unable to recover it. 
00:34:17.826 [2024-10-08 17:50:09.805496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.826 [2024-10-08 17:50:09.805572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.826 [2024-10-08 17:50:09.805584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.826 [2024-10-08 17:50:09.805591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.826 [2024-10-08 17:50:09.805597] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.826 [2024-10-08 17:50:09.805611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.826 qpair failed and we were unable to recover it. 00:34:17.827 [2024-10-08 17:50:09.815392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.827 [2024-10-08 17:50:09.815450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.827 [2024-10-08 17:50:09.815463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.827 [2024-10-08 17:50:09.815470] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.827 [2024-10-08 17:50:09.815476] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:17.827 [2024-10-08 17:50:09.815499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:17.827 qpair failed and we were unable to recover it. 00:34:18.089 [2024-10-08 17:50:09.825518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.089 [2024-10-08 17:50:09.825565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.089 [2024-10-08 17:50:09.825579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.089 [2024-10-08 17:50:09.825585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.089 [2024-10-08 17:50:09.825592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:18.089 [2024-10-08 17:50:09.825605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.089 qpair failed and we were unable to recover it. 
00:34:18.089 [2024-10-08 17:50:09.835581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.089 [2024-10-08 17:50:09.835634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.089 [2024-10-08 17:50:09.835647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.089 [2024-10-08 17:50:09.835653] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.089 [2024-10-08 17:50:09.835659] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:18.089 [2024-10-08 17:50:09.835673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.089 qpair failed and we were unable to recover it. 00:34:18.089 [2024-10-08 17:50:09.845571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.089 [2024-10-08 17:50:09.845623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.089 [2024-10-08 17:50:09.845637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.089 [2024-10-08 17:50:09.845644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.089 [2024-10-08 17:50:09.845650] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:18.089 [2024-10-08 17:50:09.845663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.089 qpair failed and we were unable to recover it. 00:34:18.089 [2024-10-08 17:50:09.855649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.089 [2024-10-08 17:50:09.855704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.089 [2024-10-08 17:50:09.855717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.089 [2024-10-08 17:50:09.855724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.089 [2024-10-08 17:50:09.855730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:18.089 [2024-10-08 17:50:09.855744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.089 qpair failed and we were unable to recover it. 
00:34:18.089 [2024-10-08 17:50:09.865543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.089 [2024-10-08 17:50:09.865595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.089 [2024-10-08 17:50:09.865609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.089 [2024-10-08 17:50:09.865616] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.089 [2024-10-08 17:50:09.865623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:18.089 [2024-10-08 17:50:09.865641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.089 qpair failed and we were unable to recover it. 00:34:18.089 [2024-10-08 17:50:09.875580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.089 [2024-10-08 17:50:09.875643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.089 [2024-10-08 17:50:09.875656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.089 [2024-10-08 17:50:09.875663] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.089 [2024-10-08 17:50:09.875669] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:18.089 [2024-10-08 17:50:09.875683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.089 qpair failed and we were unable to recover it. 00:34:18.089 [2024-10-08 17:50:09.885701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.089 [2024-10-08 17:50:09.885763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.089 [2024-10-08 17:50:09.885776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.089 [2024-10-08 17:50:09.885783] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.089 [2024-10-08 17:50:09.885789] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:18.089 [2024-10-08 17:50:09.885803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.089 qpair failed and we were unable to recover it. 
00:34:18.089 [2024-10-08 17:50:09.895709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.090 [2024-10-08 17:50:09.895758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.090 [2024-10-08 17:50:09.895771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.090 [2024-10-08 17:50:09.895778] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.090 [2024-10-08 17:50:09.895784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:18.090 [2024-10-08 17:50:09.895798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.090 qpair failed and we were unable to recover it. 00:34:18.090 [2024-10-08 17:50:09.905723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.090 [2024-10-08 17:50:09.905774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.090 [2024-10-08 17:50:09.905787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.090 [2024-10-08 17:50:09.905794] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.090 [2024-10-08 17:50:09.905804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:18.090 [2024-10-08 17:50:09.905818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.090 qpair failed and we were unable to recover it. 00:34:18.090 [2024-10-08 17:50:09.915787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.090 [2024-10-08 17:50:09.915839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.090 [2024-10-08 17:50:09.915852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.090 [2024-10-08 17:50:09.915859] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.090 [2024-10-08 17:50:09.915865] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:18.090 [2024-10-08 17:50:09.915879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.090 qpair failed and we were unable to recover it. 
00:34:18.090 [2024-10-08 17:50:09.925699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.090 [2024-10-08 17:50:09.925752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.090 [2024-10-08 17:50:09.925765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.090 [2024-10-08 17:50:09.925772] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.090 [2024-10-08 17:50:09.925778] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:18.090 [2024-10-08 17:50:09.925792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.090 qpair failed and we were unable to recover it. 00:34:18.090 [2024-10-08 17:50:09.935822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.090 [2024-10-08 17:50:09.935877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.090 [2024-10-08 17:50:09.935889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.090 [2024-10-08 17:50:09.935896] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.090 [2024-10-08 17:50:09.935903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:18.090 [2024-10-08 17:50:09.935917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.090 qpair failed and we were unable to recover it. 00:34:18.090 [2024-10-08 17:50:09.945844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.090 [2024-10-08 17:50:09.945893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.090 [2024-10-08 17:50:09.945906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.090 [2024-10-08 17:50:09.945913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.090 [2024-10-08 17:50:09.945919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:18.090 [2024-10-08 17:50:09.945933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.090 qpair failed and we were unable to recover it. 
00:34:18.090 [2024-10-08 17:50:09.955971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.090 [2024-10-08 17:50:09.956076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.090 [2024-10-08 17:50:09.956090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.090 [2024-10-08 17:50:09.956097] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.090 [2024-10-08 17:50:09.956103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:18.090 [2024-10-08 17:50:09.956117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.090 qpair failed and we were unable to recover it. 00:34:18.090 [2024-10-08 17:50:09.965911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.090 [2024-10-08 17:50:09.965958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.090 [2024-10-08 17:50:09.965971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.090 [2024-10-08 17:50:09.965983] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.090 [2024-10-08 17:50:09.965989] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:18.090 [2024-10-08 17:50:09.966003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.090 qpair failed and we were unable to recover it. 00:34:18.090 [2024-10-08 17:50:09.975931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.090 [2024-10-08 17:50:09.975981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.090 [2024-10-08 17:50:09.975994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.090 [2024-10-08 17:50:09.976001] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.090 [2024-10-08 17:50:09.976007] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:18.090 [2024-10-08 17:50:09.976021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.090 qpair failed and we were unable to recover it. 
00:34:18.090 [2024-10-08 17:50:09.985818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.090 [2024-10-08 17:50:09.985862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.090 [2024-10-08 17:50:09.985874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.090 [2024-10-08 17:50:09.985881] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.090 [2024-10-08 17:50:09.985887] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:18.090 [2024-10-08 17:50:09.985901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.090 qpair failed and we were unable to recover it. 00:34:18.090 [2024-10-08 17:50:09.995899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.090 [2024-10-08 17:50:09.995953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.090 [2024-10-08 17:50:09.995966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.090 [2024-10-08 17:50:09.995980] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.090 [2024-10-08 17:50:09.995987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:18.090 [2024-10-08 17:50:09.996001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.090 qpair failed and we were unable to recover it. 00:34:18.090 [2024-10-08 17:50:10.006053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.090 [2024-10-08 17:50:10.006102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.090 [2024-10-08 17:50:10.006117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.090 [2024-10-08 17:50:10.006124] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.090 [2024-10-08 17:50:10.006130] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:18.090 [2024-10-08 17:50:10.006144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.090 qpair failed and we were unable to recover it. 
00:34:18.090 [2024-10-08 17:50:10.016010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.090 [2024-10-08 17:50:10.016058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.090 [2024-10-08 17:50:10.016071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.090 [2024-10-08 17:50:10.016078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.090 [2024-10-08 17:50:10.016085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:18.090 [2024-10-08 17:50:10.016098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.090 qpair failed and we were unable to recover it. 00:34:18.090 [2024-10-08 17:50:10.026030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.090 [2024-10-08 17:50:10.026078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.090 [2024-10-08 17:50:10.026090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.090 [2024-10-08 17:50:10.026097] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.090 [2024-10-08 17:50:10.026104] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:18.090 [2024-10-08 17:50:10.026117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.090 qpair failed and we were unable to recover it. 00:34:18.091 [2024-10-08 17:50:10.036056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.091 [2024-10-08 17:50:10.036112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.091 [2024-10-08 17:50:10.036127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.091 [2024-10-08 17:50:10.036134] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.091 [2024-10-08 17:50:10.036140] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90 00:34:18.091 [2024-10-08 17:50:10.036155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.091 qpair failed and we were unable to recover it. 
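For readers decoding the failure signature above: sct 1 is the NVMe "Command Specific" status code type, and sc 130 is 0x82, the value the NVMe over Fabrics specification assigns, for the Connect command, to "Connect Invalid Parameters". That is consistent with the target-side message: each I/O-queue CONNECT names controller ID 0x1, which the target no longer recognizes once this disconnect test tears the association down, so every reconnect attempt is rejected and the host-side completion path reports transport error -6 (ENXIO). A minimal decoder for the sct/sc pair is sketched below in C; the constants are the spec-defined Fabrics Connect status values, but the helper itself is hypothetical and is not part of SPDK or this test harness.

    #include <stdio.h>

    /* Hypothetical helper: map the sct/sc pair printed by
     * nvme_fabric_qpair_connect_poll() to its spec name. sct 1 is the
     * "Command Specific" status code type; the cases below are the
     * NVMe-oF Connect command-specific status codes. */
    static const char *decode_connect_status(int sct, int sc)
    {
        if (sct != 1)
            return "not a command-specific status";
        switch (sc) {
        case 0x80: return "Connect Incompatible Format";
        case 0x81: return "Connect Controller Busy";
        case 0x82: return "Connect Invalid Parameters"; /* sc 130 in this log */
        case 0x84: return "Connect Invalid Host";
        default:   return "other command-specific status";
        }
    }

    int main(void)
    {
        printf("sct 1, sc 130 -> %s\n", decode_connect_status(1, 130));
        return 0;
    }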
00:34:18.091 [2024-10-08 17:50:10.046137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:18.091 [2024-10-08 17:50:10.046186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:18.091 [2024-10-08 17:50:10.046201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:18.091 [2024-10-08 17:50:10.046208] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:18.091 [2024-10-08 17:50:10.046215] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa44000b90
00:34:18.091 [2024-10-08 17:50:10.046229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:18.091 qpair failed and we were unable to recover it.
00:34:18.091 [2024-10-08 17:50:10.046388] nvme_ctrlr.c:4536:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:34:18.091 A controller has encountered a failure and is being reset.
Controller properly reset.
00:34:18.351 Initializing NVMe Controllers
00:34:18.351 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:18.351 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:18.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:34:18.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:34:18.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:34:18.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:34:18.351 Initialization complete. Launching workers.
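The sequence above is the expected recovery path rather than a further failure: the failed Keep Alive marks the controller as failed, the host resets it, re-attaches over the same TCP listener, and re-associates one I/O qpair per lcore before relaunching the worker threads. A host application's poll-and-reset loop against the public SPDK NVMe API looks roughly like the sketch below; it is a simplified illustration (reconnect bookkeeping and error handling omitted, surrounding variables assumed to exist), not the test's actual code.

    #include <errno.h>
    #include "spdk/nvme.h"

    /* Illustrative sketch: poll an I/O qpair; on a transport error
     * (-ENXIO, the "-6" printed above) reset the failed controller,
     * after which its qpairs can be reconnected. */
    static void poll_and_recover(struct spdk_nvme_ctrlr *ctrlr,
                                 struct spdk_nvme_qpair *qpair)
    {
        int rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no batch limit */);

        if (rc == -ENXIO && spdk_nvme_ctrlr_is_failed(ctrlr)) {
            /* "A controller has encountered a failure and is being reset." */
            if (spdk_nvme_ctrlr_reset(ctrlr) == 0) {
                /* "Controller properly reset." */
            }
        }
    }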
00:34:18.351 Starting thread on core 1
00:34:18.351 Starting thread on core 2
00:34:18.351 Starting thread on core 3
00:34:18.351 Starting thread on core 0
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:34:18.351
00:34:18.351 real 0m11.398s
00:34:18.351 user 0m21.762s
00:34:18.351 sys 0m3.906s
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:18.351 ************************************
00:34:18.351 END TEST nvmf_target_disconnect_tc2
00:34:18.351 ************************************
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:18.351 rmmod nvme_tcp
00:34:18.351 rmmod nvme_fabrics
00:34:18.351 rmmod nvme_keyring
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 563882 ']'
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 563882
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 563882 ']'
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 563882
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 563882
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']'
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 563882'
killing process with pid 563882
17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 563882
00:34:18.351 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 563882
00:34:18.612 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:34:18.612 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:34:18.612 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:34:18.612 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:34:18.612 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save
00:34:18.613 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:34:18.613 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore
00:34:18.613 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:18.613 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:18.613 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:18.613 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:18.613 17:50:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:20.526 17:50:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:20.526
00:34:20.526 real 0m21.977s
00:34:20.526 user 0m49.377s
00:34:20.526 sys 0m10.236s
00:34:20.526 17:50:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable
00:34:20.526 17:50:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:34:20.526 ************************************
00:34:20.526 END TEST nvmf_target_disconnect
00:34:20.526 ************************************
00:34:20.787 17:50:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:34:20.787
00:34:20.787 real 6m37.514s
00:34:20.787 user 11m26.236s
00:34:20.787 sys 2m17.322s
00:34:20.787 17:50:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable
00:34:20.787 17:50:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:34:20.787 ************************************
00:34:20.787 END TEST nvmf_host
00:34:20.787 ************************************
00:34:20.787 17:50:12 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:34:20.787 17:50:12 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:34:20.787 17:50:12 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:34:20.787 17:50:12 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:34:20.787 17:50:12 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:34:20.787 17:50:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:20.787 ************************************
00:34:20.787 START TEST nvmf_target_core_interrupt_mode
00:34:20.787 ************************************
00:34:20.787 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:34:20.787 * Looking for test storage... 00:34:20.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:34:20.787 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:20.787 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:34:20.787 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:21.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.048 --rc genhtml_branch_coverage=1 00:34:21.048 --rc genhtml_function_coverage=1 00:34:21.048 --rc genhtml_legend=1 00:34:21.048 --rc geninfo_all_blocks=1 00:34:21.048 --rc geninfo_unexecuted_blocks=1 00:34:21.048 00:34:21.048 ' 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:21.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.048 --rc genhtml_branch_coverage=1 00:34:21.048 --rc genhtml_function_coverage=1 00:34:21.048 --rc genhtml_legend=1 00:34:21.048 --rc geninfo_all_blocks=1 00:34:21.048 --rc geninfo_unexecuted_blocks=1 00:34:21.048 00:34:21.048 ' 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:21.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.048 --rc genhtml_branch_coverage=1 00:34:21.048 --rc genhtml_function_coverage=1 00:34:21.048 --rc genhtml_legend=1 00:34:21.048 --rc geninfo_all_blocks=1 00:34:21.048 --rc geninfo_unexecuted_blocks=1 00:34:21.048 00:34:21.048 ' 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:21.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.048 --rc genhtml_branch_coverage=1 00:34:21.048 --rc genhtml_function_coverage=1 00:34:21.048 --rc genhtml_legend=1 00:34:21.048 --rc geninfo_all_blocks=1 00:34:21.048 --rc geninfo_unexecuted_blocks=1 00:34:21.048 00:34:21.048 ' 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:21.048 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:21.049 ************************************ 00:34:21.049 START TEST nvmf_abort 00:34:21.049 ************************************ 00:34:21.049 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:34:21.049 * Looking for test storage... 00:34:21.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:21.049 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:21.049 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:34:21.049 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:21.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.311 --rc genhtml_branch_coverage=1 00:34:21.311 --rc genhtml_function_coverage=1 00:34:21.311 --rc genhtml_legend=1 00:34:21.311 --rc geninfo_all_blocks=1 00:34:21.311 --rc geninfo_unexecuted_blocks=1 00:34:21.311 00:34:21.311 ' 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:21.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.311 --rc genhtml_branch_coverage=1 00:34:21.311 --rc genhtml_function_coverage=1 00:34:21.311 --rc genhtml_legend=1 00:34:21.311 --rc geninfo_all_blocks=1 00:34:21.311 --rc geninfo_unexecuted_blocks=1 00:34:21.311 00:34:21.311 ' 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:21.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.311 --rc genhtml_branch_coverage=1 00:34:21.311 --rc genhtml_function_coverage=1 00:34:21.311 --rc genhtml_legend=1 00:34:21.311 --rc geninfo_all_blocks=1 00:34:21.311 --rc geninfo_unexecuted_blocks=1 00:34:21.311 00:34:21.311 ' 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:21.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.311 --rc genhtml_branch_coverage=1 00:34:21.311 --rc genhtml_function_coverage=1 00:34:21.311 --rc genhtml_legend=1 00:34:21.311 --rc geninfo_all_blocks=1 00:34:21.311 --rc geninfo_unexecuted_blocks=1 00:34:21.311 00:34:21.311 ' 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
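The lt 1.15 2 probe traced above is scripts/common.sh asking whether the installed lcov predates 2.x, which decides the coverage flags exported next. A minimal sketch of that comparison, trimmed to the '<' operator only (the real cmp_versions also handles the other relations and more operand shapes):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-                       # split on dots and dashes, as in the trace
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater
        done
        return 1                           # equal versions are not '<'
    }

    lt 1.15 2 && echo 'lcov < 2: use the --rc lcov_*_coverage=1 spellings'

Here 1.15 splits into (1 15) and 2 into (2); the first component already decides the result, matching the return 0 seen in the trace.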
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.311 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:21.312 17:50:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:34:21.312 17:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.452 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:29.453 17:50:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:29.453 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
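The gather_supported_nvmf_pci_devs walk running here matches PCI vendor:device pairs (0x8086:0x1592/0x159b for Intel E810, 0x8086:0x37d2 for X722, plus the Mellanox ConnectX list) and resolves each hit to its kernel netdev through sysfs, producing the Found lines for the two E810 ports (the second follows just below). A standalone sketch of the same idea, assuming lspci is installed; find_e810_ports is our name, the harness itself walks a prebuilt pci_bus_cache:

    find_e810_ports() {
        # -Dnn prints the full domain:bus:dev.fn address plus [vendor:device] IDs
        lspci -Dnn | awk '/\[8086:(1592|159b)\]/ {print $1}'
    }

    for pci in $(find_e810_ports); do
        echo "Found $pci"                      # 0000:31:00.0 / 0000:31:00.1 here
        ls "/sys/bus/pci/devices/$pci/net/"    # kernel netdev name, e.g. cvl_0_0
    done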
00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:29.453 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:29.453 Found net devices under 0000:31:00.0: cvl_0_0 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:29.453 Found net devices under 0000:31:00.1: cvl_0_1 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:29.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:29.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:34:29.453 00:34:29.453 --- 10.0.0.2 ping statistics --- 00:34:29.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:29.453 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:29.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:29.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:34:29.453 00:34:29.453 --- 10.0.0.1 ping statistics --- 00:34:29.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:29.453 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:34:29.453 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:29.454 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:34:29.454 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:29.454 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:29.454 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:29.454 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:29.454 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:29.454 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:29.454 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:29.454 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:34:29.454 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:29.454 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:29.454 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.454 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=569670 
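To recap the nvmf_tcp_init block above before the target comes up: the harness wires the two E810 ports back to back by moving one into a private network namespace, so a single host acts as both NVMe/TCP target (10.0.0.2, inside cvl_0_0_ns_spdk) and initiator (10.0.0.1, root namespace). Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk                      # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator port, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'          # the comment tags the rule for cleanup
    ping -c 1 10.0.0.2                                # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and back

Both pings answering in under a millisecond confirms the back-to-back link before any NVMe traffic is attempted.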
00:34:29.454 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 569670 00:34:29.454 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:34:29.454 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 569670 ']' 00:34:29.454 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:29.454 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:29.454 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:29.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:29.454 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:29.454 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.454 [2024-10-08 17:50:20.886509] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:29.454 [2024-10-08 17:50:20.887689] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:34:29.454 [2024-10-08 17:50:20.887737] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:29.454 [2024-10-08 17:50:20.979047] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:29.454 [2024-10-08 17:50:21.071943] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:29.454 [2024-10-08 17:50:21.072013] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:29.454 [2024-10-08 17:50:21.072022] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:29.454 [2024-10-08 17:50:21.072029] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:29.454 [2024-10-08 17:50:21.072036] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:29.454 [2024-10-08 17:50:21.073369] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:34:29.454 [2024-10-08 17:50:21.073526] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:29.454 [2024-10-08 17:50:21.073527] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:34:29.454 [2024-10-08 17:50:21.157897] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:29.454 [2024-10-08 17:50:21.158076] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:29.454 [2024-10-08 17:50:21.158549] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
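nvmfappstart then launches the target inside that namespace. -m 0xE pins the reactors to cores 1-3 (hence the three Reactor started notices), -i 0 selects shared-memory id 0, -e 0xFFFF enables all tracepoint groups, and --interrupt-mode is what produces the surrounding Set spdk_thread ... to intr mode lines: reactors sleep on file descriptors instead of busy-polling. The launch, with $rootdir standing in for the long workspace path:

    ip netns exec cvl_0_0_ns_spdk \
        "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # common.sh helper; polls /var/tmp/spdk.sock until RPC answers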
00:34:29.454 [2024-10-08 17:50:21.158620] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:29.715 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:29.715 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:34:29.715 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:29.715 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:29.715 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.975 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:29.975 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:34:29.975 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.975 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.975 [2024-10-08 17:50:21.746535] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:29.975 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.975 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:34:29.975 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.975 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.975 Malloc0 00:34:29.975 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.975 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:29.975 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.975 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.975 Delay0 00:34:29.975 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.976 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:29.976 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.976 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.976 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.976 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:34:29.976 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:34:29.976 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.976 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.976 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:29.976 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.976 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.976 [2024-10-08 17:50:21.834374] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:29.976 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.976 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:29.976 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.976 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.976 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.976 17:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:34:30.237 [2024-10-08 17:50:22.004181] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:34:32.149 Initializing NVMe Controllers 00:34:32.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:34:32.149 controller IO queue size 128 less than required 00:34:32.149 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:34:32.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:34:32.149 Initialization complete. Launching workers. 
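While the abort workload runs (its summary follows), the provisioning performed by the rpc_cmd traces above condenses to the sequence below; rpc.py stands for scripts/rpc.py against the default /var/tmp/spdk.sock, and the flag glosses are our reading of the values traced:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256   # transport flags exactly as traced
    rpc.py bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB RAM bdev, 4 KiB blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000         # avg/p99 read & write latency, microseconds (~1 s)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    "$rootdir/build/examples/abort" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128                       # core 0, 1 s run, queue depth 128

The ~1 s delay bdev keeps a deep backlog of outstanding I/O, so nearly every submitted command is still cancellable; that is why the statistics below show roughly 28.5 k successful aborts against only 123 completions.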
00:34:32.149 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28541 00:34:32.149 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28598, failed to submit 66 00:34:32.149 success 28541, unsuccessful 57, failed 0 00:34:32.149 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:32.149 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.149 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:32.149 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.149 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:34:32.149 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:34:32.149 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:32.149 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:34:32.149 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:32.149 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:34:32.150 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:32.150 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:32.150 rmmod nvme_tcp 00:34:32.150 rmmod nvme_fabrics 00:34:32.150 rmmod nvme_keyring 00:34:32.410 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:32.410 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:34:32.410 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:34:32.410 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 569670 ']' 00:34:32.410 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 569670 00:34:32.410 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 569670 ']' 00:34:32.410 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 569670 00:34:32.410 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:34:32.410 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:32.410 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 569670 00:34:32.410 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:32.410 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:32.410 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 569670' 00:34:32.410 killing process with pid 569670 00:34:32.410 
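Teardown, which completes just below with the kill of pid 569670, mirrors the setup; condensed from the nvmftestfini trace:

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    sync
    modprobe -v -r nvme-tcp        # hence the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines
    modprobe -v -r nvme-fabrics
    kill 569670 && wait 569670     # stop nvmf_tgt (its comm is checked as reactor_1 first)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drops only the tagged test rule
    # _remove_spdk_ns then tears down cvl_0_0_ns_spdk, returning cvl_0_0 to the
    # root namespace (our reading; the helper body is traced to /dev/null here)
    ip -4 addr flush cvl_0_1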
17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 569670 00:34:32.410 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 569670 00:34:32.675 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:32.675 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:32.675 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:32.675 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:34:32.675 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:34:32.675 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:32.675 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:34:32.675 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:32.675 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:32.675 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.675 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:32.675 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.589 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:34.589 00:34:34.589 real 0m13.605s 00:34:34.589 user 0m11.259s 00:34:34.589 sys 0m6.978s 00:34:34.589 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:34.589 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:34.589 ************************************ 00:34:34.589 END TEST nvmf_abort 00:34:34.589 ************************************ 00:34:34.589 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:34:34.589 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:34.589 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:34.589 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:34.850 ************************************ 00:34:34.850 START TEST nvmf_ns_hotplug_stress 00:34:34.850 ************************************ 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:34:34.850 * Looking for test storage... 
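The END TEST banner and real/user/sys summary above, and the START banner for nvmf_ns_hotplug_stress around it, come from the run_test wrapper in autotest_common.sh. Its rough shape, simplified (the real helper also checks its argument count, the '[' 4 -le 1 ']' probes in this trace, and manages xtrace):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                      # emits the real/user/sys lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test nvmf_ns_hotplug_stress "$rootdir/test/nvmf/target/ns_hotplug_stress.sh" \
        --transport=tcp --interrupt-mode

The Looking for test storage / Found test storage pair just above and below is the per-test scratch-space probe each target script performs on entry.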
00:34:34.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:34.850 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:34.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.851 --rc genhtml_branch_coverage=1 00:34:34.851 --rc genhtml_function_coverage=1 00:34:34.851 --rc genhtml_legend=1 00:34:34.851 --rc geninfo_all_blocks=1 00:34:34.851 --rc geninfo_unexecuted_blocks=1 00:34:34.851 00:34:34.851 ' 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:34.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.851 --rc genhtml_branch_coverage=1 00:34:34.851 --rc genhtml_function_coverage=1 00:34:34.851 --rc genhtml_legend=1 00:34:34.851 --rc geninfo_all_blocks=1 00:34:34.851 --rc geninfo_unexecuted_blocks=1 00:34:34.851 00:34:34.851 ' 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:34.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.851 --rc genhtml_branch_coverage=1 00:34:34.851 --rc genhtml_function_coverage=1 00:34:34.851 --rc genhtml_legend=1 00:34:34.851 --rc geninfo_all_blocks=1 00:34:34.851 --rc geninfo_unexecuted_blocks=1 00:34:34.851 00:34:34.851 ' 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:34.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.851 --rc genhtml_branch_coverage=1 00:34:34.851 --rc genhtml_function_coverage=1 
00:34:34.851 --rc genhtml_legend=1 00:34:34.851 --rc geninfo_all_blocks=1 00:34:34.851 --rc geninfo_unexecuted_blocks=1 00:34:34.851 00:34:34.851 ' 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
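Each time paths/export.sh is re-sourced (once per test script, as here for ns_hotplug_stress), it prepends the same three tool directories again, which is why the PATH values in this trace balloon to many repeated copies of /opt/go, /opt/golangci and /opt/protoc. Lookup stops at the first match, so the duplication is harmless, but an idempotent prepend would avoid it; a sketch (our suggestion, not the shipped export.sh):

    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;               # already present, skip
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/protoc/21.7/bin
    export PATH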
00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:34.851 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:35.112 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.112 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:35.112 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:35.112 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:34:35.112 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:43.254 17:50:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:43.254 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:43.255 17:50:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:43.255 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:43.255 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:43.255 
17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:43.255 Found net devices under 0000:31:00.0: cvl_0_0 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:43.255 Found net devices under 0000:31:00.1: cvl_0_1 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:43.255 17:50:34 
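The discovery pass above boils down to a small loop: for each supported PCI function found earlier via pci_bus_cache, the harness lists the netdev the bound driver (ice, here) registers under sysfs and collects its name. A condensed sketch of that logic as it appears in the trace (not the verbatim nvmf/common.sh source; pci_devs is assumed to be already populated):

    for pci in "${pci_devs[@]}"; do
        # netdev(s) the kernel driver exposes for this PCI function
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip sysfs path, keep e.g. cvl_0_0
        net_devs+=("${pci_net_devs[@]}")
    done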
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:43.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:43.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms
00:34:43.255 
00:34:43.255 --- 10.0.0.2 ping statistics ---
00:34:43.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:43.255 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:43.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:43.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms
00:34:43.255 
00:34:43.255 --- 10.0.0.1 ping statistics ---
00:34:43.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:43.255 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=574447
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 574447
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 574447 ']'
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:43.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
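Condensing the nvmf_tcp_init trace above: one port of the E810 pair (cvl_0_0) is moved into a private network namespace for the target, while its sibling (cvl_0_1) stays in the root namespace as the initiator, so host and target traffic crosses the physical link rather than loopback. A minimal sketch of the same topology, using only commands visible in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator

The sub-millisecond RTTs in both directions confirm the wiring before nvmf_tgt is launched inside the namespace via ip netns exec.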
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable
00:34:43.255 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:34:43.255 [2024-10-08 17:50:34.539131] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:34:43.256 [2024-10-08 17:50:34.540289] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
00:34:43.256 [2024-10-08 17:50:34.540341] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:43.256 [2024-10-08 17:50:34.630819] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:34:43.256 [2024-10-08 17:50:34.724744] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:43.256 [2024-10-08 17:50:34.724805] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:43.256 [2024-10-08 17:50:34.724814] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:43.256 [2024-10-08 17:50:34.724827] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:43.256 [2024-10-08 17:50:34.724834] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:43.256 [2024-10-08 17:50:34.726175] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:34:43.256 [2024-10-08 17:50:34.726468] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:34:43.256 [2024-10-08 17:50:34.726470] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:34:43.256 [2024-10-08 17:50:34.816447] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:34:43.256 [2024-10-08 17:50:34.817419] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:34:43.256 [2024-10-08 17:50:34.817423] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:34:43.256 [2024-10-08 17:50:34.817694] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:34:43.516 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:34:43.516 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0
00:34:43.516 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:34:43.517 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable
00:34:43.517 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:34:43.517 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:43.517 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:34:43.517 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:34:43.777 [2024-10-08 17:50:35.563547] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:43.777 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:34:44.038 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:44.038 [2024-10-08 17:50:35.956270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:44.038 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:44.298 17:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:34:44.559 Malloc0
00:34:44.559 17:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:34:44.559 Delay0
00:34:44.559 17:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:34:44.820 17:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:34:45.080 NULL1
00:34:45.080 17:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
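The RPC sequence above builds the entire test fixture: a TCP transport, one subsystem with data and discovery listeners on 10.0.0.2:4420, and two namespaces — Delay0 (a delay bdev stacked on a 32 MiB malloc bdev, injecting roughly one second of latency per operation given the microsecond-valued -r/-t/-w/-n arguments) and NULL1 (a resizable null bdev). Collected in one place for readability, with rpc.py standing in for the full scripts/rpc.py path used in the trace:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1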
00:34:45.341 17:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:34:45.341 17:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=575042
00:34:45.341 17:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:34:45.341 17:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:46.283 Read completed with error (sct=0, sc=11)
00:34:46.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:46.544 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:34:46.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:46.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:46.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:46.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:46.544 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:34:46.544 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:34:46.804 true
00:34:46.804 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:34:46.804 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:47.744 17:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:34:47.744 17:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:34:47.744 17:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:34:48.005 true
00:34:48.005 17:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:34:48.005 17:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:48.266 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
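What follows for the rest of the 30-second run is the main loop of ns_hotplug_stress.sh: while the perf job is still alive, namespace 1 is ripped out from under it, re-added, and the null bdev grown by one unit. Reconstructed from the repeating @44/@45/@46/@49/@50 trace lines (a sketch of the loop's shape, not the verbatim script):

    null_size=1000
    while kill -0 "$PERF_PID"; do                   # perf (pid 575042) still running?
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"  # prints 'true' on success
    done

Each removal surfaces on the initiator as bursts of 'Read completed with error (sct=0, sc=11)' — generic status 0x0b, consistent with I/O landing on a just-removed namespace — which perf tolerates because it was started with -Q 1000.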
00:34:48.266 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:34:48.266 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:34:48.526 true
00:34:48.526 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:34:48.526 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:49.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:49.909 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:34:49.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:49.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:49.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:49.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:49.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:49.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:49.909 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:34:49.909 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:34:49.909 true
00:34:50.169 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:34:50.169 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:51.110 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:34:51.110 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:34:51.110 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:34:51.110 true
00:34:51.370 17:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:34:51.370 17:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:51.370 17:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:34:51.630 17:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:34:51.630 17:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:34:51.889 true
00:34:51.889 17:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:34:51.889 17:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:52.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:53.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:53.088 17:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:34:53.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:53.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:53.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:53.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:53.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:53.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:53.088 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:34:53.088 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:34:53.348 true
00:34:53.348 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:34:53.348 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:54.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:54.286 17:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:34:54.286 17:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:34:54.286 17:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:34:54.545 true
00:34:54.545 17:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:34:54.545 17:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:54.804 17:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:34:54.804 17:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:34:54.804 17:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:34:55.064 true
00:34:55.064 17:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:34:55.064 17:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:56.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:56.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:56.447 17:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:34:56.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:56.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:56.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:56.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:56.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:56.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:56.447 17:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:34:56.447 17:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:34:56.707 true
00:34:56.707 17:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:34:56.707 17:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:57.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:34:57.648 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:34:57.648 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:34:57.648 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:34:57.909 true
00:34:57.909 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:34:57.909 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:57.909 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:34:58.169 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:34:58.169 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:34:58.429 true
00:34:58.429 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:34:58.429 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:58.429 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:34:58.689 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:34:58.689 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:34:58.949 true
00:34:58.949 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:34:58.949 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:59.210 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:34:59.210 17:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:34:59.210 17:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:34:59.471 true
00:34:59.471 17:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:34:59.471 17:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:00.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:00.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:00.855 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:35:00.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:00.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:00.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:00.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:00.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:00.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:00.855 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:35:00.855 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:35:00.855 true
00:35:01.116 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:35:01.116 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:02.057 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:35:02.057 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:35:02.058 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:35:02.058 true
00:35:02.318 17:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:35:02.318 17:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:02.318 17:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:35:02.578 17:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:35:02.578 17:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:35:02.839 true
00:35:02.839 17:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:35:02.839 17:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:03.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:04.042 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:35:04.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:04.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:04.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:04.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:04.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:04.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:04.043 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:35:04.043 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:35:04.320 true
00:35:04.320 17:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:35:04.320 17:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:05.325 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:05.325 17:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:35:05.325 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:05.325 17:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:35:05.325 17:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:35:05.599 true
00:35:05.599 17:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:35:05.599 17:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:05.599 17:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:35:05.885 17:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:35:05.885 17:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:35:06.171 true
00:35:06.171 17:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:35:06.171 17:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:07.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:07.152 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:35:07.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:07.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:07.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:07.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:07.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:07.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:07.446 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:35:07.446 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:35:07.749 true
00:35:07.749 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:35:07.749 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:08.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:08.402 17:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:35:08.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:08.697 17:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:35:08.697 17:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:35:08.697 true
00:35:08.697 17:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:35:08.697 17:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:08.975 17:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:35:09.285 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:35:09.285 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:35:09.285 true
00:35:09.285 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:35:09.285 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:09.581 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:35:09.903 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:35:09.903 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:35:09.903 true
00:35:09.903 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:35:09.903 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:10.164 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:35:10.164 17:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:35:10.164 17:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:35:10.425 true
00:35:10.425 17:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:35:10.425 17:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:11.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:11.811 17:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:35:11.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:11.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:11.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:11.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:11.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:11.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:11.811 17:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:35:11.811 17:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:35:11.811 true
00:35:11.811 17:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:35:11.811 17:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:12.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:12.752 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:35:12.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:13.013 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:35:13.013 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:35:13.013 true
00:35:13.013 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:35:13.013 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:13.273 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:35:13.533 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:35:13.534 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:35:13.534 true
00:35:13.534 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:35:13.534 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:13.794 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:35:14.055 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:35:14.055 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:35:14.055 true
00:35:14.055 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042
00:35:14.055 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:15.775 Initializing NVMe Controllers
00:35:15.775 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:35:15.775 Controller IO queue size 128, less than required.
00:35:15.775 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:35:15.775 Controller IO queue size 128, less than required.
00:35:15.775 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:35:15.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:35:15.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:35:15.775 Initialization complete. Launching workers.
00:35:15.775 ========================================================
00:35:15.775                                                                                  Latency(us)
00:35:15.775 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:35:15.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2242.50       1.09   36704.44    1910.57 1012721.98
00:35:15.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   18504.83       9.04    6916.88    1152.53  431811.03
00:35:15.775 ========================================================
00:35:15.775 Total                                                                    :   20747.33      10.13   10136.50    1152.53 1012721.98
00:35:15.775
00:35:15.775 17:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:16.034 17:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:35:16.034 17:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:35:16.295 true 00:35:16.295 17:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 575042 00:35:16.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (575042) - No such process 00:35:16.295 17:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 575042 00:35:16.295 17:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:16.295 17:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:16.556 17:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:35:16.556 17:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:35:16.556 17:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:35:16.556 17:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:16.556 17:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:35:16.556 null0 00:35:16.816 17:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:16.816 17:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:16.816 17:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:35:16.816 null1 00:35:16.816 17:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:16.816
17:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:16.817 17:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:35:17.076 null2 00:35:17.076 17:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:17.076 17:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:17.076 17:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:35:17.076 null3 00:35:17.076 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:17.076 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:17.076 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:35:17.336 null4 00:35:17.336 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:17.336 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:17.336 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:35:17.596 null5 00:35:17.596 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:17.596 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:17.596 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:35:17.596 null6 00:35:17.596 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:17.596 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:17.596 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:35:17.857 null7 00:35:17.857 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:17.857 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:17.857 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:35:17.857 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:17.857 17:51:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:35:17.857 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:35:17.857 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:35:17.857 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:17.857 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:17.857 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:17.857 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:17.857 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:17.857 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:35:17.857 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:17.857 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:17.857 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:35:17.857 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
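The interleaved @14-@18 lines above come from the script's add_remove helper, which each background worker runs against its own namespace ID and null bdev. Reconstructed from this xtrace, the helper looks roughly like the sketch below; $rpc_py is an assumed shorthand for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py invocation, not the script's literal text.

    # Sketch of add_remove() in target/ns_hotplug_stress.sh, reconstructed from the @14-@18 trace lines.
    # $rpc_py (assumed name) stands in for the full rpc.py path seen in the trace.
    add_remove() {
        local nsid=$1 bdev=$2                 # @14: e.g. "local nsid=1 bdev=null0"
        for ((i = 0; i < 10; i++)); do        # @16: ten add/remove rounds per worker
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"    # @17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"            # @18
        done
    }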
00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
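Earlier in this trace (the null_size=1021 through 1031 stretch), the @44-@50 lines reconstruct to a hot-plug loop that keeps cycling namespace 1 and growing the NULL1 bdev for as long as the I/O workload (PID 575042 here) stays alive. A minimal sketch under the same $rpc_py assumption; perf_pid and null_size are guesses at the script's variable names, and bdev_null_resize takes the new size in MB:

    while kill -0 "$perf_pid"; do                                          # @44: fails with "No such process" once the workload exits
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # @45: hot-remove namespace 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # @46: hot-add it back, backed by Delay0
        ((++null_size))                                                    # @49: 1021, 1022, ...
        $rpc_py bdev_null_resize NULL1 "$null_size"                        # @50: resize the null bdev to $null_size MB
    done
    wait "$perf_pid"                                                       # @53: reached right after the kill error above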
00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
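The @58-@66 lines scattered through this stretch reconstruct to the fan-out below: create eight 100 MB null bdevs with 4096-byte blocks, then run add_remove for namespaces 1-8 in parallel and collect the worker PIDs (the "wait 581712 581714 ..." @66 line appears just after this). A sketch under the same $rpc_py assumption:

    nthreads=8                                       # @58
    pids=()                                          # @58
    for ((i = 0; i < nthreads; i++)); do             # @59
        $rpc_py bdev_null_create null$i 100 4096     # @60: 100 MB null bdev, 4096-byte block size
    done
    for ((i = 0; i < nthreads; i++)); do             # @62
        add_remove $((i + 1)) null$i &               # @63: namespace i+1 backed by null$i, one worker each
        pids+=($!)                                   # @64
    done
    wait "${pids[@]}"                                # @66: the "wait 581712 581714 ..." line in the trace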
00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 581712 581714 581715 581717 581719 581721 581723 581725 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:17.858 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:18.119 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:18.119 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:18.119 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:18.119 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:18.119 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:18.119 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:18.119 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:18.119 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:18.119 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:18.119 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:18.119 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:18.119 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:18.119 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:18.119 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:18.380 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:18.642 17:51:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:18.642 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:18.904 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:18.904 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:18.904 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:18.904 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:18.904 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:18.904 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:18.904 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:18.904 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:18.904 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:18.904 17:51:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:18.904 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:18.904 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:18.904 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:18.904 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:18.904 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:18.904 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:18.904 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:18.904 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:19.165 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:19.165 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:19.165 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:19.165 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:19.165 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:19.165 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:19.165 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:19.165 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:19.165 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:19.165 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:19.165 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:19.165 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:19.165 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:19.165 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:19.166 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:19.166 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:19.166 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:19.166 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:19.166 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:19.426 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:19.426 17:51:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:19.685 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:19.685 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:19.685 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:19.685 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:19.685 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:19.685 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:19.685 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:19.685 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:19.685 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:19.685 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:19.685 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:19.685 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:19.685 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:19.685 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:19.685 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:19.685 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:19.686 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:19.686 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 ))
00:35:19.686 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:35:19.686 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:19.686 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:19.686 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:35:19.946 17:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[... the @16 loop-counter and @17/@18 add_ns/remove_ns records repeat in this pattern for namespaces 1-8 (backing null bdevs null0-null7), interleaving adds and removes, until i reaches 10; timestamps run 00:35:19.686-00:35:21.805, 17:51:11-17:51:13 ...]
00:35:21.805 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:21.805 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:21.805 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:35:21.805 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:35:21.805 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup
00:35:21.805 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:35:21.805 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:21.805 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:35:21.805 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:21.805 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:21.805 rmmod nvme_tcp
00:35:21.805 rmmod nvme_fabrics
00:35:21.805 rmmod nvme_keyring
00:35:21.805 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
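For readers following the trace, this is a minimal sketch of the hotplug loop the @16-@18 records above correspond to. Only the rpc.py invocations are taken verbatim from the log; the random namespace selection and the `|| true` error tolerance are assumptions about the test's shape, not the script's actual source.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1

    i=0
    while (( i < 10 )); do                                   # @16
        n=$(( RANDOM % 8 + 1 ))                              # namespace ids 1-8 (assumed selection)
        if (( RANDOM % 2 )); then
            # @17: attach null bdev null<n-1> as namespace n
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$subsys" "null$(( n - 1 ))" || true
        else
            # @18: detach namespace n; racing adds against removes is the point of the stress
            "$rpc" nvmf_subsystem_remove_ns "$subsys" "$n" || true
        fi
        (( ++i ))
    done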
00:35:21.805 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:35:21.805 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:35:21.805 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 574447 ']'
00:35:21.805 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 574447
00:35:21.805 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 574447 ']'
00:35:21.805 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 574447
00:35:21.805 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname
00:35:21.805 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:35:21.805 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 574447
00:35:22.066 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:35:22.066 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:35:22.066 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 574447'
00:35:22.066 killing process with pid 574447
00:35:22.066 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 574447
00:35:22.066 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 574447
00:35:22.066 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:35:22.066 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:35:22.066 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:35:22.066 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:35:22.066 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save
00:35:22.066 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:35:22.066 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore
00:35:22.066 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:22.066 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:22.066 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:22.067 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
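The @950-@974 records above walk autotest_common.sh's killprocess helper. A condensed sketch of that flow, reconstructed from the xtrace rather than copied from the source (the early-return behavior is an assumption):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                             # @950: refuse an empty pid
        kill -0 "$pid" 2> /dev/null || return 0               # @954: already gone, nothing to do
        if [[ $(uname) == Linux ]]; then                      # @955
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # @956
            [[ $process_name == sudo ]] && return 1           # @960: never kill our own sudo wrapper
        fi
        echo "killing process with pid $pid"                  # @968
        kill "$pid"                                           # @969
        wait "$pid" || true                                   # @974: reap; exit status ignored here
    }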
00:35:22.067 17:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:24.612 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:35:24.612
00:35:24.612 real    0m49.429s
00:35:24.612 user    2m57.401s
00:35:24.612 sys     0m20.043s
00:35:24.612 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:35:24.612 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:35:24.612 ************************************
00:35:24.612 END TEST nvmf_ns_hotplug_stress
00:35:24.612 ************************************
00:35:24.612 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:35:24.612 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:35:24.612 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:35:24.612 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:35:24.612 ************************************
00:35:24.612 START TEST nvmf_delete_subsystem
00:35:24.612 ************************************
00:35:24.612 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:35:24.612 * Looking for test storage...
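The START/END banners and the real/user/sys block come from a run_test wrapper that times each test script. This is a rough, assumed sketch of its shape, simplified from what the trace shows; the real helper also validates its argument count (the '[' 4 -le 1 ']' check above) and does more bookkeeping:

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"               # produces the real/user/sys block seen above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }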
00:35:24.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:35:24.612 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:35:24.612 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version
00:35:24.612 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:35:24.612 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:35:24.612 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
[... scripts/common.sh@333-@368: cmp_versions splits both version strings on IFS=.-: (ver1=(1 15), ver2=(2)), compares them component by component via decimal, finds ver1[0]=1 < ver2[0]=2 with op '<', and returns 0 ...]
00:35:24.612 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:35:24.612 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
[... common/autotest_common.sh@1694-@1695: LCOV_OPTS and LCOV are then exported with those flags plus --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 ...]
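For reference, the comparison the @333-@368 records step through can be sketched as follows. This is reconstructed from the xtrace and simplified (the real helper also tracks lt/gt/eq flags and handles more operators):

    lt() { cmp_versions "$1" '<' "$2"; }    # scripts/common.sh@373

    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"      # @336: split on '.', '-' and ':'
        IFS=.-: read -ra ver2 <<< "$3"      # @337
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do   # @364
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then                    # @367
                [[ $op == '>' || $op == '>=' ]] && return 0 || return 1
            elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then                  # @368
                [[ $op == '<' || $op == '<=' ]] && return 0 || return 1
            fi
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
    }

    # cmp_versions 1.15 '<' 2: ver1=(1 15), ver2=(2); 1 < 2 and op is '<',
    # so lt 1.15 2 succeeds and the older lcov gets the branch-coverage options.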
00:35:24.612 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:35:24.612 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:35:24.612 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
[... nvmf/common.sh@9-@22: test defaults are set: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_SERIAL=SPDKISFASTANDAWESOME, NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396, NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396, NVME_CONNECT='nvme connect', NET_TYPE=phy, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn ...]
00:35:24.613 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:35:24.613 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:35:24.613 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2-@6: the golangci 1.54.2, protoc 21.7 and go 1.21.1 bin directories are prepended to PATH (duplicated several times over from repeated sourcing across test scripts), exported, and echoed ...]
00:35:24.613 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:35:24.613 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:35:24.613 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:35:24.613 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:35:24.613 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:35:24.613 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:35:24.613 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:35:24.613 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:35:24.613 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:35:24.613 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs
00:35:24.613 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no
00:35:24.613 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns
00:35:24.613 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:35:24.613 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:35:24.613 17:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:35:32.759 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:35:32.759 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:35:32.759 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:35:32.759 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:35:32.759 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:35:32.759 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810
00:35:32.759 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:35:32.759 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx
[... nvmf/common.sh@325-@344: the e810, x722 and mlx arrays are filled from pci_bus_cache for the known Intel (0x1592, 0x159b, 0x37d2) and Mellanox (0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015, 0x1013) device ids ...]
00:35:32.759 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:35:32.759 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:35:32.759 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:35:32.759 Found 0000:31:00.0 (0x8086 - 0x159b)
[... nvmf/common.sh@368-@378: the ice driver is neither unknown nor unbound and 0x159b is not a Mellanox id, so no rebinding is needed; the same checks then run for the second port ...]
00:35:32.759 Found 0000:31:00.1 (0x8086 - 0x159b)
00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:32.760 Found net devices under 0000:31:00.0: cvl_0_0
00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:35:32.760 Found net devices under 0000:31:00.1: cvl_0_1
00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes
00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
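The device-to-interface mapping above boils down to a sysfs walk. A minimal sketch, with the PCI addresses taken from the log and the loop shape assumed; it relies on the kernel exposing each NIC's interfaces under /sys/bus/pci/devices/<addr>/net/:

    net_devs=()
    for pci in 0000:31:00.0 0000:31:00.1; do                 # addresses from the log
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # @409: kernel netdev dirs
        pci_net_devs=("${pci_net_devs[@]##*/}")              # @425: keep only interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")                     # @427
    done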
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:32.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:32.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:35:32.760 00:35:32.760 --- 10.0.0.2 ping statistics --- 00:35:32.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:32.760 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:32.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:32.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:35:32.760 00:35:32.760 --- 10.0.0.1 ping statistics --- 00:35:32.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:32.760 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:32.760 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:32.760 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=587057 00:35:32.760 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 587057 00:35:32.760 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 587057 ']' 00:35:32.760 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:32.760 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:32.760 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:32.760 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:32.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
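The trace above is nvmf_tcp_init at work: one port of the e810 pair (cvl_0_0) is moved into a private network namespace to serve as the target side, while the other (cvl_0_1) stays in the host as the initiator, and the two pings prove the path works before it returns 0. A condensed sketch of that bring-up, restricted to commands that appear verbatim in the trace:

# Interface names as in this run: cvl_0_0 = target (in namespace), cvl_0_1 = initiator (host).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP (namespace side)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                   # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> host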
00:35:32.760 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:32.760 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:32.760 [2024-10-08 17:51:24.059989] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:32.760 [2024-10-08 17:51:24.061159] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:35:32.760 [2024-10-08 17:51:24.061210] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:32.760 [2024-10-08 17:51:24.150479] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:32.760 [2024-10-08 17:51:24.246101] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:32.761 [2024-10-08 17:51:24.246166] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:32.761 [2024-10-08 17:51:24.246174] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:32.761 [2024-10-08 17:51:24.246181] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:32.761 [2024-10-08 17:51:24.246188] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:32.761 [2024-10-08 17:51:24.247287] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:35:32.761 [2024-10-08 17:51:24.247291] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:32.761 [2024-10-08 17:51:24.323429] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:32.761 [2024-10-08 17:51:24.324009] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:32.761 [2024-10-08 17:51:24.324332] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
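nvmfappstart (above) backgrounds nvmf_tgt inside the namespace via ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3, records nvmfpid, and blocks in waitforlisten until the app answers on /var/tmp/spdk.sock; the interrupt-mode and reactor notices above are the startup side of that. A minimal sketch of this style of readiness poll, assuming SPDK's scripts/rpc.py (the real waitforlisten helper in autotest_common.sh is more thorough):

nvmfpid=$!                                 # pid of the backgrounded nvmf_tgt
rpc_addr=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do            # max_retries=100, as logged above
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup"; exit 1; }
    # Ready once the app services an RPC over the UNIX domain socket.
    if scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.5
done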
00:35:33.021 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:33.021 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:35:33.021 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:33.021 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:33.021 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:33.021 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:33.021 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:33.021 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.021 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:33.021 [2024-10-08 17:51:24.924365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:33.021 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.021 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:33.021 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.021 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:33.021 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.021 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:33.022 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.022 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:33.022 [2024-10-08 17:51:24.968938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:33.022 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.022 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:35:33.022 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.022 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:33.022 NULL1 00:35:33.022 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.022 17:51:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:33.022 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.022 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:33.022 Delay0 00:35:33.022 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.022 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:33.022 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.022 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:33.022 17:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.022 17:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=587331 00:35:33.022 17:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:35:33.022 17:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:35:33.282 [2024-10-08 17:51:25.078833] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
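The trace above assembled the actual test fixture: a subsystem whose only namespace is a delay bdev (Delay0) stacked on a null bdev, so every I/O is held for a full second, plus a background spdk_nvme_perf run against it; the delete that follows lands while those I/Os are still in flight. A condensed sketch of the whole sequence, using the exact RPCs and perf flags from the trace (rpc_py standing in for the suite's rpc_cmd wrapper):

rpc_py="scripts/rpc.py"      # stand-in; the suite's rpc_cmd wraps the same tool
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py bdev_null_create NULL1 1000 512       # 1000 MiB backing bdev, 512 B blocks
$rpc_py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # 1,000,000 us on every I/O path
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2                                       # let commands pile up inside Delay0
$rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # delete with I/O in flight

Every command still queued at that point completes with an abort status, which is the flood of errors below.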
00:35:35.194 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:35.194 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.194 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:35:35.455 Read completed with error (sct=0, sc=8) 00:35:35.455 Read completed with error (sct=0, sc=8) 00:35:35.455 starting I/O failed: -6 00:35:35.455 Read completed with error (sct=0, sc=8) 00:35:35.455 Write completed with error (sct=0, sc=8) 
[several hundred further "Read/Write completed with error (sct=0, sc=8)" completions and "starting I/O failed: -6" markers repeat verbatim here while the deleted subsystem's queues drain; elided for readability] 
00:35:35.456 [2024-10-08 17:51:27.201872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd68fd0 is same with the state(6) to be set 
[further aborted completions elided] 
00:35:35.457 [2024-10-08 17:51:27.205281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f744800cfe0 is same with the state(6) to be set 
00:35:36.398 [2024-10-08 17:51:28.177889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6a6b0 is same with the state(6) to be set 
[further aborted completions elided] 
00:35:36.399 [2024-10-08 17:51:28.206802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd696c0 is same with the state(6) to be set 
[further aborted completions elided] 
00:35:36.399 [2024-10-08 17:51:28.207088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd691b0 is same with the state(6) to be set 
[further aborted completions elided] 
00:35:36.399 [2024-10-08 17:51:28.207370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7448000c00 is same with the state(6) to be set 
[further aborted completions elided] 
00:35:36.399 [2024-10-08 17:51:28.207643] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f744800d310 is same with the state(6) to be set 
00:35:36.399 Initializing NVMe Controllers 00:35:36.399 Attached to
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:36.399 Controller IO queue size 128, less than required. 00:35:36.399 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:36.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:36.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:36.399 Initialization complete. Launching workers. 00:35:36.399 ======================================================== 00:35:36.399 Latency(us) 00:35:36.399 Device Information : IOPS MiB/s Average min max 00:35:36.399 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 186.07 0.09 899843.00 499.01 1009681.44 00:35:36.399 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.19 0.08 949796.45 389.65 1011356.87 00:35:36.399 ======================================================== 00:35:36.399 Total : 348.26 0.17 923107.04 389.65 1011356.87 00:35:36.399 00:35:36.399 [2024-10-08 17:51:28.208195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6a6b0 (9): Bad file descriptor 00:35:36.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:35:36.399 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.399 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:35:36.399 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 587331 00:35:36.399 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 587331 00:35:36.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (587331) - No such process 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 587331 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 587331 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 587331 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 
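The (sct=0, sc=8) completions above are the expected outcome: status code type 0 (generic) with status code 0x08 corresponds to "command aborted due to SQ deletion", i.e. the target aborted everything still queued when the subsystem's queues were torn down, and perf duly exits with "errors occurred". The kill/NOT-wait bookkeeping around this then proves the process is really gone and that reaping it reports failure; a sketch of that pattern, per the delete_subsystem.sh lines quoted in the trace:

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do                         # perf still running?
    (( delay++ > 30 )) && { echo "perf never exited"; exit 1; }   # ~15 s cap at sleep 0.5
    sleep 0.5
done
# NOT wait: the test passes only if wait exits non-zero for the dead/failed pid.
if wait "$perf_pid"; then
    echo "perf unexpectedly reported success"; exit 1
fi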
00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:36.971 [2024-10-08 17:51:28.736859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=588069 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 588069 00:35:36.971 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:36.971 [2024-10-08 17:51:28.821892] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
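The second pass (above) recreates the subsystem, re-adds the listener and the Delay0 namespace, and starts a shorter perf run (-t 3) that this time is simply left to finish; the sleep-0.5 poll below (capped at delay > 20) waits it out. Its results are easy to sanity-check: with Delay0 holding every I/O for 1,000,000 us and perf keeping a queue depth of 128 per core,

    IOPS per core ≈ queue depth / per-I/O latency = 128 / 1.0 s = 128 IOPS

which is exactly the 128.00 IOPS and ~1,000,000 us averages in the Latency table below.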
00:35:37.541 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:37.541 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 588069 00:35:37.541 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:37.801 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:37.801 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 588069 00:35:37.801 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:38.373 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:38.373 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 588069 00:35:38.373 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:38.944 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:38.944 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 588069 00:35:38.944 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:39.515 17:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:39.515 17:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 588069 00:35:39.515 17:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:40.086 17:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:40.086 17:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 588069 00:35:40.087 17:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:40.087 Initializing NVMe Controllers 00:35:40.087 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:40.087 Controller IO queue size 128, less than required. 00:35:40.087 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:40.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:40.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:40.087 Initialization complete. Launching workers. 
00:35:40.087 ======================================================== 00:35:40.087 Latency(us) 00:35:40.087 Device Information : IOPS MiB/s Average min max 00:35:40.087 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002212.89 1000243.77 1005443.71 00:35:40.087 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004449.96 1000453.94 1042361.96 00:35:40.087 ======================================================== 00:35:40.087 Total : 256.00 0.12 1003331.42 1000243.77 1042361.96 00:35:40.087 00:35:40.350 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:40.350 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 588069 00:35:40.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (588069) - No such process 00:35:40.350 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 588069 00:35:40.350 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:35:40.350 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:35:40.350 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:40.350 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:35:40.350 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:40.350 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:35:40.350 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:40.350 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:40.350 rmmod nvme_tcp 00:35:40.350 rmmod nvme_fabrics 00:35:40.350 rmmod nvme_keyring 00:35:40.350 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:40.611 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:35:40.611 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:35:40.611 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 587057 ']' 00:35:40.611 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 587057 00:35:40.611 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 587057 ']' 00:35:40.611 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 587057 00:35:40.611 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:35:40.611 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:40.611 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 587057 00:35:40.611 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:40.611 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:40.611 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 587057' 00:35:40.611 killing process with pid 587057 00:35:40.611 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 587057 00:35:40.611 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 587057 00:35:40.611 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:40.611 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:40.611 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:40.611 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:35:40.611 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:40.611 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:35:40.611 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:35:40.611 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:40.612 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:40.612 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:40.612 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:40.612 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:43.155 00:35:43.155 real 0m18.500s 00:35:43.155 user 0m26.708s 00:35:43.155 sys 0m7.422s 00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:43.155 ************************************ 00:35:43.155 END TEST nvmf_delete_subsystem 00:35:43.155 ************************************ 00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable
00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:35:43.155 ************************************
00:35:43.155 START TEST nvmf_host_management
00:35:43.155 ************************************
00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:35:43.155 * Looking for test storage...
00:35:43.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version
00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l
00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l
00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-:
00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1
00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-:
00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2
00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<'
00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2
00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1
00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in
00:35:43.155 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 ))
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:35:43.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:43.156 --rc genhtml_branch_coverage=1
00:35:43.156 --rc genhtml_function_coverage=1
00:35:43.156 --rc genhtml_legend=1
00:35:43.156 --rc geninfo_all_blocks=1
00:35:43.156 --rc geninfo_unexecuted_blocks=1
00:35:43.156
00:35:43.156 '
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:35:43.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:43.156 --rc genhtml_branch_coverage=1
00:35:43.156 --rc genhtml_function_coverage=1
00:35:43.156 --rc genhtml_legend=1
00:35:43.156 --rc geninfo_all_blocks=1
00:35:43.156 --rc geninfo_unexecuted_blocks=1
00:35:43.156
00:35:43.156 '
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:35:43.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:43.156 --rc genhtml_branch_coverage=1
00:35:43.156 --rc genhtml_function_coverage=1
00:35:43.156 --rc genhtml_legend=1
00:35:43.156 --rc geninfo_all_blocks=1
00:35:43.156 --rc geninfo_unexecuted_blocks=1
00:35:43.156
00:35:43.156 '
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:35:43.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:43.156 --rc genhtml_branch_coverage=1
00:35:43.156 --rc genhtml_function_coverage=1
00:35:43.156 --rc genhtml_legend=1
00:35:43.156 --rc geninfo_all_blocks=1
00:35:43.156 --rc geninfo_unexecuted_blocks=1
00:35:43.156
00:35:43.156 '
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable
00:35:43.156 17:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:35:51.340 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:35:51.340 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=()
00:35:51.340 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs
00:35:51.340 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=()
00:35:51.340 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:35:51.340 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=()
00:35:51.340 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers
00:35:51.340 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=()
00:35:51.340 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs
00:35:51.340 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=()
00:35:51.340 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810
00:35:51.340 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=()
00:35:51.340 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722
00:35:51.340 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=()
00:35:51.340 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx
00:35:51.340 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:35:51.340 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:35:51.341 Found 0000:31:00.0 (0x8086 - 0x159b)
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:35:51.341 Found 0000:31:00.1 (0x8086 - 0x159b)
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]]
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:35:51.341 Found net devices under 0000:31:00.0: cvl_0_0
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]]
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:35:51.341 Found net devices under 0000:31:00.1: cvl_0_1
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:35:51.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:51.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms
00:35:51.341
00:35:51.341 --- 10.0.0.2 ping statistics ---
00:35:51.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:51.341 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:51.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:51.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms
00:35:51.341
00:35:51.341 --- 10.0.0.1 ping statistics ---
00:35:51.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:51.341 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms
00:35:51.341 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=592839
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 592839
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 592839 ']'
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:51.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable
00:35:51.342 17:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:35:51.342 [2024-10-08 17:51:42.694478] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:35:51.342 [2024-10-08 17:51:42.695656] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... [2024-10-08 17:51:42.695710] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:51.342 [2024-10-08 17:51:42.785251] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:35:51.342 [2024-10-08 17:51:42.881125] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:51.342 [2024-10-08 17:51:42.881191] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:51.342 [2024-10-08 17:51:42.881199] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:51.342 [2024-10-08 17:51:42.881206] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:51.342 [2024-10-08 17:51:42.881212] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:51.342 [2024-10-08 17:51:42.883520] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:35:51.342 [2024-10-08 17:51:42.883654] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:35:51.342 [2024-10-08 17:51:42.883814] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4
00:35:51.342 [2024-10-08 17:51:42.883814] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:35:51.342 [2024-10-08 17:51:42.984718] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:35:51.342 [2024-10-08 17:51:42.985348] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:35:51.342 [2024-10-08 17:51:42.985819] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:35:51.342 [2024-10-08 17:51:42.986392] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:35:51.342 [2024-10-08 17:51:42.986468] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:35:51.603 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:35:51.603 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0
00:35:51.603 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:35:51.603 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable
00:35:51.603 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:35:51.603 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:51.603 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:35:51.603 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:51.603 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:35:51.603 [2024-10-08 17:51:43.556874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:51.603 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:51.603 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:35:51.603 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable
00:35:51.603 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:35:51.603 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:35:51.603 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:35:51.603 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:35:51.603 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:51.603 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:35:51.864 Malloc0
00:35:51.864 [2024-10-08 17:51:43.649214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:51.864 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:51.864 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:35:51.864 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable
00:35:51.864 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:35:51.864 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=593185
00:35:51.864 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 593185 /var/tmp/bdevperf.sock
00:35:51.864 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 593185 ']'
00:35:51.864 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:35:51.864 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100
00:35:51.864 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:35:51.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:35:51.864 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:35:51.864 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:35:51.864 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable
00:35:51.864 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:35:51.864 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=()
00:35:51.864 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config
00:35:51.864 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:35:51.864 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:35:51.864 {
00:35:51.864 "params": {
00:35:51.864 "name": "Nvme$subsystem",
00:35:51.864 "trtype": "$TEST_TRANSPORT",
00:35:51.864 "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:51.864 "adrfam": "ipv4",
00:35:51.864 "trsvcid": "$NVMF_PORT",
00:35:51.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:51.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:51.864 "hdgst": ${hdgst:-false},
00:35:51.864 "ddgst": ${ddgst:-false}
00:35:51.864 },
00:35:51.864 "method": "bdev_nvme_attach_controller"
00:35:51.864 }
00:35:51.864 EOF
00:35:51.864 )")
00:35:51.864 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat
00:35:51.864 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq .
00:35:51.864 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=,
00:35:51.864 17:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:35:51.864 "params": {
00:35:51.864 "name": "Nvme0",
00:35:51.864 "trtype": "tcp",
00:35:51.864 "traddr": "10.0.0.2",
00:35:51.864 "adrfam": "ipv4",
00:35:51.865 "trsvcid": "4420",
00:35:51.865 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:35:51.865 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:35:51.865 "hdgst": false,
00:35:51.865 "ddgst": false
00:35:51.865 },
00:35:51.865 "method": "bdev_nvme_attach_controller"
00:35:51.865 }'
00:35:51.865 [2024-10-08 17:51:43.759603] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... [2024-10-08 17:51:43.759679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid593185 ]
00:35:51.865 [2024-10-08 17:51:43.843205] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:52.125 [2024-10-08 17:51:43.938548] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:35:52.385 Running I/O for 10 seconds...
00:35:52.647 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:35:52.647 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0
00:35:52.647 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:35:52.647 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:52.647 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:35:52.647 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:52.647 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:35:52.647 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:35:52.647 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:35:52.647 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:35:52.647 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:35:52.647 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:35:52.647 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:35:52.647 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:35:52.647 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:35:52.647 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:35:52.647 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:52.647 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:35:52.647 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:52.911 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=648
00:35:52.911 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 648 -ge 100 ']'
00:35:52.911 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:35:52.911 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break
00:35:52.911 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:35:52.911 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:35:52.911 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:52.911 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:35:52.911 [2024-10-08 17:51:44.661084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.911 [2024-10-08 17:51:44.661382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.912 [2024-10-08 17:51:44.661390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.912 [2024-10-08 17:51:44.661397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.912 [2024-10-08 17:51:44.661404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.912 [2024-10-08 17:51:44.661411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.912 [2024-10-08 17:51:44.661419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.912 [2024-10-08 17:51:44.661426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x890960 is same with the state(6) to be set
00:35:52.912 [2024-10-08 17:51:44.661532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.661589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.661614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.661623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.661633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.661642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.661652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.661660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.661670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.661685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.661696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.661703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.661714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.661721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.661731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.661738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.661749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.661756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.661765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.661773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.661783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.661791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.661800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.661808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.661818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.661826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.661836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.661844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.661854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.661862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.661871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.661879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.661889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.661897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.661909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.661916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.661926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.661933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.661943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.661950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.661960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.661968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.661986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.661995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.662004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.662011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.662021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.662028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.662038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.662045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.662054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.662062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.662072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.662079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.662088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.662096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.662105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.912 [2024-10-08 17:51:44.662113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.912 [2024-10-08 17:51:44.662123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.913 [2024-10-08 17:51:44.662133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.913 [2024-10-08 17:51:44.662143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.913 [2024-10-08 17:51:44.662150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.913 [2024-10-08 17:51:44.662160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.913 [2024-10-08 17:51:44.662168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.913 [2024-10-08 17:51:44.662178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.913 [2024-10-08 17:51:44.662187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.913 [2024-10-08 17:51:44.662197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.913 [2024-10-08 17:51:44.662205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.913 [2024-10-08 17:51:44.662215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.913 [2024-10-08 17:51:44.662223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.913 [2024-10-08 17:51:44.662232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.913 [2024-10-08 17:51:44.662240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.913 [2024-10-08 17:51:44.662250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.913 [2024-10-08 17:51:44.662258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.913 [2024-10-08 17:51:44.662268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.913 [2024-10-08 17:51:44.662275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.913 [2024-10-08 17:51:44.662285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.913 [2024-10-08 17:51:44.662292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.913 [2024-10-08 17:51:44.662302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.913 [2024-10-08 17:51:44.662310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.913 [2024-10-08 17:51:44.662320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.913 [2024-10-08 17:51:44.662327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.913 [2024-10-08 17:51:44.662337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.913 [2024-10-08 17:51:44.662344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.913 [2024-10-08 17:51:44.662356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.913 [2024-10-08 17:51:44.662363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.913 [2024-10-08 17:51:44.662373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.913 [2024-10-08 17:51:44.662380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.913 [2024-10-08 17:51:44.662390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.913 [2024-10-08 17:51:44.662398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.913 [2024-10-08 17:51:44.662408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.913 [2024-10-08 17:51:44.662416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.913 [2024-10-08 17:51:44.662425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.913 [2024-10-08 17:51:44.662433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.913 [2024-10-08 17:51:44.662443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.913 [2024-10-08 17:51:44.662450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:52.913 [2024-10-08 17:51:44.662459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.913 [2024-10-08 17:51:44.662467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:52.913 [2024-10-08 17:51:44.662477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.913 [2024-10-08 17:51:44.662484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:52.913 [2024-10-08 17:51:44.662494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.913 [2024-10-08 17:51:44.662502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:52.913 [2024-10-08 17:51:44.662511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.913 [2024-10-08 17:51:44.662519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:52.913 [2024-10-08 17:51:44.662529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.913 [2024-10-08 17:51:44.662536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:52.913 [2024-10-08 17:51:44.662545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.913 [2024-10-08 17:51:44.662553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:52.913 [2024-10-08 17:51:44.662562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.913 [2024-10-08 17:51:44.662572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:52.913 [2024-10-08 17:51:44.662582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.913 [2024-10-08 17:51:44.662589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:52.913 [2024-10-08 17:51:44.662598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.913 [2024-10-08 17:51:44.662605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:52.913 [2024-10-08 17:51:44.662615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.913 [2024-10-08 
17:51:44.662623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:52.913 [2024-10-08 17:51:44.662632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.913 [2024-10-08 17:51:44.662640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:52.913 [2024-10-08 17:51:44.662649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.913 [2024-10-08 17:51:44.662656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:52.913 [2024-10-08 17:51:44.662666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.913 [2024-10-08 17:51:44.662673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:52.913 [2024-10-08 17:51:44.662683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.913 [2024-10-08 17:51:44.662690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:52.913 [2024-10-08 17:51:44.662699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.913 [2024-10-08 17:51:44.662706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:52.913 [2024-10-08 17:51:44.662716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.913 [2024-10-08 17:51:44.662725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:52.913 [2024-10-08 17:51:44.662734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2036f60 is same with the state(6) to be set 00:35:52.913 [2024-10-08 17:51:44.662805] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2036f60 was disconnected and freed. reset controller. 
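Every completion in the run above carries the same status pair (00/08): status code type 0x0 (generic command status) with status code 0x08, which NVMe defines as Command Aborted due to SQ Deletion. That is expected here: the controller reset tears down submission queue 1 while reads are still in flight. A minimal decode of that "(sct/sc)" notation, as a hypothetical bash helper (not part of the SPDK scripts):

# Hypothetical helper: decode the "(sct/sc)" pair that spdk_nvme_print_completion
# appends to each completion record above (both fields are printed in hex).
decode_nvme_status() {
    local sct=$1 sc=$2
    case "$sct/$sc" in
        00/00) echo "SUCCESS" ;;
        00/08) echo "ABORTED - SQ DELETION" ;;            # generic status, sc 0x08
        01/*)  echo "COMMAND SPECIFIC, sc=0x$sc" ;;       # command-specific status
        02/*)  echo "MEDIA/DATA INTEGRITY ERROR, sc=0x$sc" ;;
        *)     echo "sct=0x$sct sc=0x$sc" ;;
    esac
}
decode_nvme_status 00 08   # -> ABORTED - SQ DELETION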
00:35:52.913 [2024-10-08 17:51:44.664082] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:35:52.913 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:52.913 task offset: 98304 on job bdev=Nvme0n1 fails
00:35:52.914
00:35:52.914 Latency(us)
00:35:52.914 [2024-10-08T15:51:44.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:52.914 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:35:52.914 Job: Nvme0n1 ended in about 0.52 seconds with error
00:35:52.914 Verification LBA range: start 0x0 length 0x400
00:35:52.914 Nvme0n1 : 0.52 1367.51 85.47 123.44 0.00 41807.05 3290.45 37355.52
00:35:52.914 [2024-10-08T15:51:44.906Z] ===================================================================================================================
00:35:52.914 [2024-10-08T15:51:44.906Z] Total : 1367.51 85.47 123.44 0.00 41807.05 3290.45 37355.52
00:35:52.914 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:35:52.914 [2024-10-08 17:51:44.666360] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:35:52.914 [2024-10-08 17:51:44.666403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e1e100 (9): Bad file descriptor
00:35:52.914 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:52.914 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:35:52.914 [2024-10-08 17:51:44.668041] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:35:52.914 [2024-10-08 17:51:44.668135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:35:52.914 [2024-10-08 17:51:44.668179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:52.914 [2024-10-08 17:51:44.668198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:35:52.914 [2024-10-08 17:51:44.668208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:35:52.914 [2024-10-08 17:51:44.668216] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:52.914 [2024-10-08 17:51:44.668224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e1e100
00:35:52.914 [2024-10-08 17:51:44.668249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e1e100 (9): Bad file descriptor
00:35:52.914 [2024-10-08 17:51:44.668264] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:35:52.914 [2024-10-08 17:51:44.668273] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:35:52.914 [2024-10-08 17:51:44.668284] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
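The FABRIC CONNECT completion above, printed as COMMAND SPECIFIC (01/84) and reported as "sct 1, sc 132" (132 = 0x84), is the Fabrics "Connect Invalid Host" status: the target rejected nqn.2016-06.io.spdk:host0 because it is not on the subsystem's allowed-host list, which is exactly what the ctrlr.c "does not allow host" error says. host_management.sh@85 then whitelists the host via rpc_cmd; a standalone sketch of the same RPC call (rpc.py path taken from the workspace layout above):

# Sketch: allow host0 on cnode0, as host_management.sh@85 does through rpc_cmd.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Inspect the subsystem (including its allowed hosts) afterwards:
"$SPDK/scripts/rpc.py" nvmf_get_subsystems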
00:35:52.914 [2024-10-08 17:51:44.668300] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:52.914 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:52.914 17:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:35:53.856 17:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 593185
00:35:53.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (593185) - No such process
00:35:53.856 17:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:35:53.856 17:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:35:53.856 17:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:35:53.856 17:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:35:53.856 17:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=()
00:35:53.856 17:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config
00:35:53.856 17:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:35:53.856 17:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:35:53.856 {
00:35:53.856 "params": {
00:35:53.856 "name": "Nvme$subsystem",
00:35:53.856 "trtype": "$TEST_TRANSPORT",
00:35:53.856 "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:53.856 "adrfam": "ipv4",
00:35:53.856 "trsvcid": "$NVMF_PORT",
00:35:53.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:53.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:53.856 "hdgst": ${hdgst:-false},
00:35:53.856 "ddgst": ${ddgst:-false}
00:35:53.856 },
00:35:53.856 "method": "bdev_nvme_attach_controller"
00:35:53.856 }
00:35:53.856 EOF
00:35:53.856 )")
00:35:53.856 17:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat
00:35:53.856 17:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq .
00:35:53.856 17:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=,
00:35:53.856 17:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:35:53.856 "params": {
00:35:53.856 "name": "Nvme0",
00:35:53.856 "trtype": "tcp",
00:35:53.856 "traddr": "10.0.0.2",
00:35:53.856 "adrfam": "ipv4",
00:35:53.856 "trsvcid": "4420",
00:35:53.856 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:35:53.856 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:35:53.856 "hdgst": false,
00:35:53.856 "ddgst": false
00:35:53.856 },
00:35:53.856 "method": "bdev_nvme_attach_controller"
00:35:53.856 }'
00:35:53.856 [2024-10-08 17:51:45.739747] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
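gen_nvmf_target_json (traced above) fills the heredoc once per subsystem argument, substituting the test environment ($TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, $NVMF_PORT), and the result reaches bdevperf as --json /dev/fd/62. Only the inner attach-controller fragment is visible in the trace; below is a sketch of an equivalent standalone run, where the outer "subsystems" wrapper is a reconstruction of the usual SPDK JSON-config layout (an assumption, since the trace does not show it) and the file path is arbitrary:

# Sketch: the same bdevperf run with the generated JSON written to a file first.
# The "subsystems" wrapper is assumed; the inner object is the fragment printed above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
"$SPDK/build/examples/bdevperf" --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1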
00:35:53.856 [2024-10-08 17:51:45.739804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid593536 ]
00:35:53.856 [2024-10-08 17:51:45.818241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:54.116 [2024-10-08 17:51:45.881463] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:35:54.376 Running I/O for 1 seconds...
00:35:55.316 1482.00 IOPS, 92.62 MiB/s
00:35:55.316
00:35:55.316 Latency(us)
00:35:55.316 [2024-10-08T15:51:47.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:55.316 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:35:55.316 Verification LBA range: start 0x0 length 0x400
00:35:55.316 Nvme0n1 : 1.02 1513.14 94.57 0.00 0.00 41561.21 1843.20 35826.35
00:35:55.316 [2024-10-08T15:51:47.308Z] ===================================================================================================================
00:35:55.316 [2024-10-08T15:51:47.308Z] Total : 1513.14 94.57 0.00 0.00 41561.21 1843.20 35826.35
00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup
00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:55.578 rmmod nvme_tcp
00:35:55.578 rmmod nvme_fabrics
00:35:55.578 rmmod nvme_keyring
00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 592839 ']'
00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 592839
00:35:55.578 17:51:47
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 592839 ']' 00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 592839 00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 592839 00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 592839' 00:35:55.578 killing process with pid 592839 00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 592839 00:35:55.578 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 592839 00:35:55.838 [2024-10-08 17:51:47.606833] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:35:55.838 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:55.838 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:55.838 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:55.838 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:35:55.838 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:35:55.838 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:55.838 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:35:55.838 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:55.838 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:55.838 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:55.838 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:55.838 17:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:57.748 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:57.748 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:35:57.748 00:35:57.748 real 0m15.022s 00:35:57.748 user 0m20.203s 
00:35:57.748 sys 0m7.552s 00:35:57.748 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:57.748 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:57.748 ************************************ 00:35:57.748 END TEST nvmf_host_management 00:35:57.748 ************************************ 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:58.009 ************************************ 00:35:58.009 START TEST nvmf_lvol 00:35:58.009 ************************************ 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:35:58.009 * Looking for test storage... 00:35:58.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:35:58.009 17:51:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:35:58.009 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:58.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:58.271 --rc genhtml_branch_coverage=1 00:35:58.271 --rc genhtml_function_coverage=1 00:35:58.271 --rc genhtml_legend=1 00:35:58.271 --rc geninfo_all_blocks=1 00:35:58.271 --rc geninfo_unexecuted_blocks=1 00:35:58.271 00:35:58.271 ' 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:58.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:58.271 --rc genhtml_branch_coverage=1 00:35:58.271 --rc genhtml_function_coverage=1 00:35:58.271 --rc genhtml_legend=1 00:35:58.271 --rc geninfo_all_blocks=1 00:35:58.271 --rc geninfo_unexecuted_blocks=1 00:35:58.271 00:35:58.271 ' 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:58.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:58.271 --rc genhtml_branch_coverage=1 00:35:58.271 --rc genhtml_function_coverage=1 00:35:58.271 --rc genhtml_legend=1 00:35:58.271 --rc geninfo_all_blocks=1 00:35:58.271 --rc geninfo_unexecuted_blocks=1 00:35:58.271 00:35:58.271 ' 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:58.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:58.271 --rc genhtml_branch_coverage=1 00:35:58.271 --rc genhtml_function_coverage=1 00:35:58.271 --rc 
genhtml_legend=1 00:35:58.271 --rc geninfo_all_blocks=1 00:35:58.271 --rc geninfo_unexecuted_blocks=1 00:35:58.271 00:35:58.271 ' 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:58.271 17:51:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:58.271 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:35:58.272 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:35:58.272 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:58.272 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:35:58.272 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:58.272 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:58.272 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:58.272 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:58.272 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:58.272 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:58.272 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:58.272 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:58.272 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:58.272 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:58.272 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:35:58.272 17:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:06.416 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:06.416 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:36:06.416 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:06.416 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:06.416 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:06.416 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:36:06.416 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:06.417 17:51:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:06.417 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:06.417 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:06.417 Found net devices under 0000:31:00.0: cvl_0_0 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:06.417 Found net devices under 0000:31:00.1: cvl_0_1 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:06.417 
17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:36:06.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:06.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.558 ms
00:36:06.417
00:36:06.417 --- 10.0.0.2 ping statistics ---
00:36:06.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:06.417 rtt min/avg/max/mdev = 0.558/0.558/0.558/0.000 ms
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:06.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:06.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms
00:36:06.417
00:36:06.417 --- 10.0.0.1 ping statistics ---
00:36:06.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:06.417 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=598189
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 598189
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 598189 ']'
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:06.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable
00:36:06.417 17:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:36:06.417 [2024-10-08 17:51:57.773406] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
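This block is the suite's standard test-net bring-up: the target-side port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP/4420 is opened with an iptables rule tagged SPDK_NVMF (so the teardown's iptables-save | grep -v SPDK_NVMF | iptables-restore can remove exactly that rule), and one ping in each direction proves reachability before the target is launched inside the namespace. A minimal sketch of the same isolation pattern on a box without the E810 port pair, using a veth pair instead (interface and namespace names here are illustrative, not from this job):

  ip netns add tgt_ns
  ip link add veth_init type veth peer name veth_tgt
  ip link set veth_tgt netns tgt_ns
  ip addr add 10.0.0.1/24 dev veth_init
  ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
  ip link set veth_init up
  ip netns exec tgt_ns ip link set veth_tgt up
  ip netns exec tgt_ns ip link set lo up
  iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                        # root ns -> target ns
  ip netns exec tgt_ns ping -c 1 10.0.0.1   # target ns -> root ns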
00:36:06.417 [2024-10-08 17:51:57.774585] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
00:36:06.417 [2024-10-08 17:51:57.774635] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:06.417 [2024-10-08 17:51:57.866291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:36:06.417 [2024-10-08 17:51:57.960626] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:06.417 [2024-10-08 17:51:57.960686] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:06.417 [2024-10-08 17:51:57.960695] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:06.417 [2024-10-08 17:51:57.960702] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:06.417 [2024-10-08 17:51:57.960709] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:06.418 [2024-10-08 17:51:57.962297] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:36:06.418 [2024-10-08 17:51:57.962458] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:36:06.418 [2024-10-08 17:51:57.962459] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:36:06.418 [2024-10-08 17:51:58.049458] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:36:06.418 [2024-10-08 17:51:58.050429] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:36:06.418 [2024-10-08 17:51:58.050442] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:36:06.418 [2024-10-08 17:51:58.050683] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
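The nvmf_tgt above was started with --interrupt-mode and core mask 0x7, so three reactors come up and every spdk_thread (app_thread plus the three nvmf poll groups) is switched to interrupt-driven operation: reactors sleep on file descriptors instead of busy-polling. A hedged sketch of that launch-and-wait step (paths as used in this job; the readiness loop is an illustration, the harness uses its own waitforlisten helper):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
  nvmfpid=$!
  # poll the default RPC socket until the target answers
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
  done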
00:36:06.679 17:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:36:06.679 17:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0
00:36:06.679 17:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:36:06.679 17:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable
00:36:06.679 17:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:36:06.679 17:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:06.679 17:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:36:06.940 [2024-10-08 17:51:58.795394] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:06.940 17:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:36:07.200 17:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:36:07.200 17:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:36:07.462 17:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:36:07.462 17:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:36:07.462 17:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:36:07.722 17:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=06d3c7d0-a728-4c57-bfc7-b1c587155af0
00:36:07.723 17:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 06d3c7d0-a728-4c57-bfc7-b1c587155af0 lvol 20
00:36:07.983 17:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7c2cf57a-10b1-4cd2-9615-b7ac8fc7b2c6
00:36:07.983 17:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:36:08.244 17:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7c2cf57a-10b1-4cd2-9615-b7ac8fc7b2c6
00:36:08.244 17:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:36:08.505 [2024-10-08 17:52:00.367288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
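The sequence above builds the device stack for the lvol test end to end: two 64 MiB malloc bdevs with 512-byte blocks, striped into a raid0 with a 64 KiB strip (-z 64 -r 0), an lvstore named lvs on the raid, a 20 MiB lvol inside it, and an NVMe-oF subsystem (nqn.2016-06.io.spdk:cnode0) exporting the lvol as namespace 1 on the 10.0.0.2:4420 TCP listener. A condensed sketch of the same RPC sequence ($rpc is shorthand for the rpc.py path used throughout this log; the UUID captures mirror what the script does with command substitution):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512                                   # -> Malloc0
  $rpc bdev_malloc_create 64 512                                   # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # RAID0, 64 KiB strip
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # prints the lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB logical volume
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420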
00:36:08.505 17:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:08.767 17:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=598646
00:36:08.767 17:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:36:08.767 17:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:36:09.711 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7c2cf57a-10b1-4cd2-9615-b7ac8fc7b2c6 MY_SNAPSHOT
00:36:09.971 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9bf27682-53d5-463e-ac14-f9af31858c42
00:36:09.971 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7c2cf57a-10b1-4cd2-9615-b7ac8fc7b2c6 30
00:36:10.232 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9bf27682-53d5-463e-ac14-f9af31858c42 MY_CLONE
00:36:10.494 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=34e1e5b8-822c-4752-a9a6-36eb5594dfa3
00:36:10.494 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 34e1e5b8-822c-4752-a9a6-36eb5594dfa3
00:36:11.066 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 598646
00:36:19.208 Initializing NVMe Controllers
00:36:19.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:36:19.208 Controller IO queue size 128, less than required.
00:36:19.208 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:19.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:36:19.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:36:19.208 Initialization complete. Launching workers.
00:36:19.208 ========================================================
00:36:19.208                                                                                                  Latency(us)
00:36:19.208 Device Information                                                        :       IOPS      MiB/s    Average        min        max
00:36:19.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   15287.50      59.72    8374.03    1808.86   90136.73
00:36:19.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   15594.90      60.92    8207.66    2324.43   73129.08
00:36:19.208 ========================================================
00:36:19.208 Total                                                                     :   30882.39     120.63    8290.02    1808.86   90136.73
00:36:19.208
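The two workers in the summary above sit on lcores 3 and 4 because spdk_nvme_perf ran with -c 0x18, and the numbers are internally consistent: 30882.39 IOPS of 4 KiB writes is 120.63 MiB/s, and by Little's law the 2 x QD128 requests in flight at an 8290.02 us average come out near the same IOPS. The high average and the ~90 ms tail are expected here, since the snapshot/resize/clone/inflate RPCs above ran against the same lvol mid-workload. A quick check of that arithmetic (plain bc, nothing from the harness):

  echo '30882.39 * 4096 / 1048576' | bc -l   # ~120.63, the Total MiB/s column
  echo '(2 * 128) / 0.00829002' | bc -l      # ~30880, close to the Total IOPS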
00:36:19.208 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:36:19.469 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7c2cf57a-10b1-4cd2-9615-b7ac8fc7b2c6
00:36:19.469 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 06d3c7d0-a728-4c57-bfc7-b1c587155af0
00:36:19.731 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:36:19.731 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:36:19.731 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:36:19.731 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup
00:36:19.731 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:36:19.731 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:19.731 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:36:19.731 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:19.731 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:19.731 rmmod nvme_tcp
00:36:19.731 rmmod nvme_fabrics
00:36:19.731 rmmod nvme_keyring
00:36:19.731 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:19.731 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:36:19.731 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:36:19.731 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 598189 ']'
00:36:19.731 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 598189
00:36:19.731 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 598189 ']'
00:36:19.731 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 598189
00:36:19.731 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
00:36:19.731 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:36:19.731 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 598189
00:36:19.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:36:19.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:36:19.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 598189'
00:36:19.992 killing process with pid 598189
00:36:19.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 598189
00:36:19.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 598189
00:36:19.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:36:19.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:36:19.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:36:19.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:36:19.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save
00:36:19.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:36:19.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore
00:36:19.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:19.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:19.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:19.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:19.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:22.538 17:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:22.538
00:36:22.538 real    0m24.161s
00:36:22.538 user    0m56.571s
00:36:22.538 sys     0m10.735s
17:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable
17:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:36:22.538 ************************************
00:36:22.538 END TEST nvmf_lvol
00:36:22.538 ************************************
00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode
00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:36:22.538 ************************************
00:36:22.538 START TEST nvmf_lvs_grow
************************************ 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:36:22.538 * Looking for test storage... 00:36:22.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:22.538 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:22.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.538 --rc genhtml_branch_coverage=1 00:36:22.538 --rc genhtml_function_coverage=1 00:36:22.538 --rc genhtml_legend=1 00:36:22.538 --rc geninfo_all_blocks=1 00:36:22.538 --rc geninfo_unexecuted_blocks=1 00:36:22.539 00:36:22.539 ' 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:22.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.539 --rc genhtml_branch_coverage=1 00:36:22.539 --rc genhtml_function_coverage=1 00:36:22.539 --rc genhtml_legend=1 00:36:22.539 --rc geninfo_all_blocks=1 00:36:22.539 --rc geninfo_unexecuted_blocks=1 00:36:22.539 00:36:22.539 ' 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:22.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.539 --rc genhtml_branch_coverage=1 00:36:22.539 --rc genhtml_function_coverage=1 00:36:22.539 --rc genhtml_legend=1 00:36:22.539 --rc geninfo_all_blocks=1 00:36:22.539 --rc geninfo_unexecuted_blocks=1 00:36:22.539 00:36:22.539 ' 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:22.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.539 --rc genhtml_branch_coverage=1 00:36:22.539 --rc genhtml_function_coverage=1 00:36:22.539 --rc genhtml_legend=1 00:36:22.539 --rc geninfo_all_blocks=1 00:36:22.539 --rc geninfo_unexecuted_blocks=1 00:36:22.539 00:36:22.539 ' 00:36:22.539 17:52:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
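The nvmf/common.sh lines above assemble the target's command line incrementally: NVMF_APP picks up -i $NVMF_APP_SHM_ID -e 0xFFFF here, gains --interrupt-mode in the lines that follow because this suite runs interrupt-mode, and is later prefixed with ip netns exec $NVMF_TARGET_NAMESPACE (the common.sh@293 step seen earlier) once the namespace exists. A compact sketch of that array-building pattern (the binary path matches the launch visible later in this log; the rest follows the script's variable names, with values assumed rather than quoted):

  NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
  NVMF_APP+=(--interrupt-mode)
  NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
  "${NVMF_APP[@]}" -m 0x1 &    # launched exactly like this further below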
00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:36:22.539 17:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:30.681 17:52:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:30.681 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:30.681 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:30.681 Found net devices under 0000:31:00.0: cvl_0_0 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:30.681 Found net devices under 0000:31:00.1: cvl_0_1 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:30.681 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:30.682 17:52:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:30.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:30.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:36:30.682 00:36:30.682 --- 10.0.0.2 ping statistics --- 00:36:30.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:30.682 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:30.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:30.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:36:30.682 00:36:30.682 --- 10.0.0.1 ping statistics --- 00:36:30.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:30.682 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=605054 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 605054 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 605054 ']' 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:30.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:30.682 17:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:30.682 [2024-10-08 17:52:21.905759] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
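Same plumbing as the nvmf_lvol run, but this nvmfappstart uses -m 0x1, so the target below comes up with a single reactor and a single nvmf poll group instead of the three seen earlier under -m 0x7. Decoding a core mask into the reactors you should expect is a one-liner (illustrative, not part of the harness):

  mask=0x7    # try 0x1, 0x7, 0x18 ...
  for i in $(seq 0 63); do
    (( (mask >> i) & 1 )) && echo "reactor expected on core $i"
  done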
00:36:30.682 [2024-10-08 17:52:21.906935] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:36:30.682 [2024-10-08 17:52:21.906994] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:30.682 [2024-10-08 17:52:21.995427] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:30.682 [2024-10-08 17:52:22.088831] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:30.682 [2024-10-08 17:52:22.088889] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:30.682 [2024-10-08 17:52:22.088898] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:30.682 [2024-10-08 17:52:22.088905] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:30.682 [2024-10-08 17:52:22.088911] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:30.682 [2024-10-08 17:52:22.089708] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.682 [2024-10-08 17:52:22.165584] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:30.682 [2024-10-08 17:52:22.165870] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:30.943 17:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:30.943 17:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:36:30.943 17:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:30.943 17:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:30.943 17:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:30.943 17:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:30.943 17:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:30.943 [2024-10-08 17:52:22.926590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:31.205 17:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:36:31.205 17:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:31.205 17:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:31.205 17:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:31.205 ************************************ 00:36:31.205 START TEST lvs_grow_clean 00:36:31.205 ************************************ 00:36:31.205 17:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow
00:36:31.205 17:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:36:31.205 17:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:36:31.205 17:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:36:31.205 17:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:36:31.205 17:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:36:31.205 17:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:36:31.205 17:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:36:31.205 17:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:36:31.205 17:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:36:31.205 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:36:31.205 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:36:31.465 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=57b29f7e-8ad5-434b-a29a-93dacb5e5926
00:36:31.465 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57b29f7e-8ad5-434b-a29a-93dacb5e5926
00:36:31.465 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
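Here lvs_grow_clean stages a growable store: a 200 MiB file exposed as an AIO bdev with a 4 KiB block size, then an lvstore with 4 MiB clusters (--cluster-sz 4194304) and --md-pages-per-cluster-ratio 300, which leaves the 49 total_data_clusters reported just below once lvstore metadata is reserved out of the raw 50. A condensed sketch of this setup plus the grow step the test is heading toward (a hypothetical /tmp path stands in for the workspace file, and the final bdev_lvol_grow_lvstore call is an assumption about the stock test flow, not something visible in this excerpt):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  truncate -s 200M /tmp/aio_file                    # illustrative path
  $rpc bdev_aio_create /tmp/aio_file aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
  truncate -s 400M /tmp/aio_file                    # grow the backing file
  $rpc bdev_aio_rescan aio_bdev                     # bdev picks up the new size
  $rpc bdev_lvol_grow_lvstore -u "$lvs"             # assumption: the step under test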
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:31.987 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:36:31.987 [2024-10-08 17:52:23.910280] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:36:31.987 [2024-10-08 17:52:23.910446] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:36:31.987 true 00:36:31.987 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57b29f7e-8ad5-434b-a29a-93dacb5e5926 00:36:31.987 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:36:32.247 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:36:32.247 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:32.508 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4e23c17a-7cf5-4b23-a512-b109accd1e9a 00:36:32.508 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:32.769 [2024-10-08 17:52:24.646930] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:32.769 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:33.030 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=605732 00:36:33.030 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:33.030 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:36:33.030 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 605732 /var/tmp/bdevperf.sock 00:36:33.030 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 605732 ']' 00:36:33.030 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:36:33.030 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:33.030 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:33.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:33.030 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:33.030 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:36:33.030 [2024-10-08 17:52:24.901319] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:36:33.030 [2024-10-08 17:52:24.901395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid605732 ] 00:36:33.030 [2024-10-08 17:52:24.982861] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:33.291 [2024-10-08 17:52:25.077274] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:33.863 17:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:33.863 17:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:36:33.863 17:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:36:34.125 Nvme0n1 00:36:34.125 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:36:34.388 [ 00:36:34.388 { 00:36:34.388 "name": "Nvme0n1", 00:36:34.388 "aliases": [ 00:36:34.388 "4e23c17a-7cf5-4b23-a512-b109accd1e9a" 00:36:34.388 ], 00:36:34.388 "product_name": "NVMe disk", 00:36:34.388 "block_size": 4096, 00:36:34.388 "num_blocks": 38912, 00:36:34.388 "uuid": "4e23c17a-7cf5-4b23-a512-b109accd1e9a", 00:36:34.388 "numa_id": 0, 00:36:34.388 "assigned_rate_limits": { 00:36:34.388 "rw_ios_per_sec": 0, 00:36:34.388 "rw_mbytes_per_sec": 0, 00:36:34.388 "r_mbytes_per_sec": 0, 00:36:34.388 "w_mbytes_per_sec": 0 00:36:34.388 }, 00:36:34.388 "claimed": false, 00:36:34.388 "zoned": false, 00:36:34.388 "supported_io_types": { 00:36:34.388 "read": true, 00:36:34.388 "write": true, 00:36:34.388 "unmap": true, 00:36:34.388 "flush": true, 00:36:34.388 "reset": true, 00:36:34.388 "nvme_admin": true, 00:36:34.388 "nvme_io": true, 00:36:34.388 "nvme_io_md": false, 00:36:34.388 "write_zeroes": true, 00:36:34.388 "zcopy": false, 00:36:34.388 "get_zone_info": false, 00:36:34.388 "zone_management": false, 00:36:34.388 "zone_append": false, 00:36:34.388 "compare": true, 00:36:34.388 "compare_and_write": true, 00:36:34.388 "abort": true, 00:36:34.388 "seek_hole": false, 00:36:34.388 "seek_data": false, 00:36:34.388 "copy": true, 
00:36:34.388 "nvme_iov_md": false 00:36:34.388 }, 00:36:34.388 "memory_domains": [ 00:36:34.388 { 00:36:34.388 "dma_device_id": "system", 00:36:34.388 "dma_device_type": 1 00:36:34.388 } 00:36:34.388 ], 00:36:34.388 "driver_specific": { 00:36:34.388 "nvme": [ 00:36:34.388 { 00:36:34.388 "trid": { 00:36:34.388 "trtype": "TCP", 00:36:34.388 "adrfam": "IPv4", 00:36:34.388 "traddr": "10.0.0.2", 00:36:34.388 "trsvcid": "4420", 00:36:34.388 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:36:34.388 }, 00:36:34.388 "ctrlr_data": { 00:36:34.388 "cntlid": 1, 00:36:34.388 "vendor_id": "0x8086", 00:36:34.388 "model_number": "SPDK bdev Controller", 00:36:34.388 "serial_number": "SPDK0", 00:36:34.388 "firmware_revision": "25.01", 00:36:34.388 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:34.388 "oacs": { 00:36:34.388 "security": 0, 00:36:34.388 "format": 0, 00:36:34.388 "firmware": 0, 00:36:34.388 "ns_manage": 0 00:36:34.388 }, 00:36:34.388 "multi_ctrlr": true, 00:36:34.388 "ana_reporting": false 00:36:34.388 }, 00:36:34.388 "vs": { 00:36:34.388 "nvme_version": "1.3" 00:36:34.388 }, 00:36:34.388 "ns_data": { 00:36:34.388 "id": 1, 00:36:34.388 "can_share": true 00:36:34.388 } 00:36:34.388 } 00:36:34.388 ], 00:36:34.388 "mp_policy": "active_passive" 00:36:34.388 } 00:36:34.388 } 00:36:34.388 ] 00:36:34.388 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=605878 00:36:34.388 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:36:34.388 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:34.388 Running I/O for 10 seconds... 
00:36:35.331 Latency(us) 00:36:35.331 [2024-10-08T15:52:27.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:35.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:35.331 Nvme0n1 : 1.00 16891.00 65.98 0.00 0.00 0.00 0.00 0.00 00:36:35.331 [2024-10-08T15:52:27.323Z] =================================================================================================================== 00:36:35.331 [2024-10-08T15:52:27.323Z] Total : 16891.00 65.98 0.00 0.00 0.00 0.00 0.00 00:36:35.331 00:36:36.273 17:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 57b29f7e-8ad5-434b-a29a-93dacb5e5926 00:36:36.542 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:36.542 Nvme0n1 : 2.00 17145.00 66.97 0.00 0.00 0.00 0.00 0.00 00:36:36.542 [2024-10-08T15:52:28.534Z] =================================================================================================================== 00:36:36.542 [2024-10-08T15:52:28.534Z] Total : 17145.00 66.97 0.00 0.00 0.00 0.00 0.00 00:36:36.542 00:36:36.542 true 00:36:36.542 17:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57b29f7e-8ad5-434b-a29a-93dacb5e5926 00:36:36.542 17:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:36:36.805 17:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:36:36.805 17:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:36:36.805 17:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 605878 00:36:37.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:37.376 Nvme0n1 : 3.00 17356.67 67.80 0.00 0.00 0.00 0.00 0.00 00:36:37.376 [2024-10-08T15:52:29.368Z] =================================================================================================================== 00:36:37.376 [2024-10-08T15:52:29.368Z] Total : 17356.67 67.80 0.00 0.00 0.00 0.00 0.00 00:36:37.376 00:36:38.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:38.315 Nvme0n1 : 4.00 17970.50 70.20 0.00 0.00 0.00 0.00 0.00 00:36:38.315 [2024-10-08T15:52:30.307Z] =================================================================================================================== 00:36:38.315 [2024-10-08T15:52:30.307Z] Total : 17970.50 70.20 0.00 0.00 0.00 0.00 0.00 00:36:38.315 00:36:39.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:39.701 Nvme0n1 : 5.00 19481.80 76.10 0.00 0.00 0.00 0.00 0.00 00:36:39.701 [2024-10-08T15:52:31.693Z] =================================================================================================================== 00:36:39.701 [2024-10-08T15:52:31.693Z] Total : 19481.80 76.10 0.00 0.00 0.00 0.00 0.00 00:36:39.701 00:36:40.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:40.643 Nvme0n1 : 6.00 20489.33 80.04 0.00 0.00 0.00 0.00 0.00 00:36:40.643 [2024-10-08T15:52:32.635Z] 
=================================================================================================================== 00:36:40.643 [2024-10-08T15:52:32.635Z] Total : 20489.33 80.04 0.00 0.00 0.00 0.00 0.00 00:36:40.643 00:36:41.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:41.585 Nvme0n1 : 7.00 21209.00 82.85 0.00 0.00 0.00 0.00 0.00 00:36:41.585 [2024-10-08T15:52:33.577Z] =================================================================================================================== 00:36:41.585 [2024-10-08T15:52:33.577Z] Total : 21209.00 82.85 0.00 0.00 0.00 0.00 0.00 00:36:41.585 00:36:42.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:42.528 Nvme0n1 : 8.00 21748.75 84.96 0.00 0.00 0.00 0.00 0.00 00:36:42.528 [2024-10-08T15:52:34.520Z] =================================================================================================================== 00:36:42.528 [2024-10-08T15:52:34.520Z] Total : 21748.75 84.96 0.00 0.00 0.00 0.00 0.00 00:36:42.528 00:36:43.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:43.468 Nvme0n1 : 9.00 22175.67 86.62 0.00 0.00 0.00 0.00 0.00 00:36:43.468 [2024-10-08T15:52:35.460Z] =================================================================================================================== 00:36:43.468 [2024-10-08T15:52:35.460Z] Total : 22175.67 86.62 0.00 0.00 0.00 0.00 0.00 00:36:43.468 00:36:44.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:44.409 Nvme0n1 : 10.00 22522.00 87.98 0.00 0.00 0.00 0.00 0.00 00:36:44.409 [2024-10-08T15:52:36.401Z] =================================================================================================================== 00:36:44.409 [2024-10-08T15:52:36.401Z] Total : 22522.00 87.98 0.00 0.00 0.00 0.00 0.00 00:36:44.409 00:36:44.409 00:36:44.409 Latency(us) 00:36:44.409 [2024-10-08T15:52:36.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:44.409 Nvme0n1 : 10.01 22522.44 87.98 0.00 0.00 5680.09 2894.51 30801.92 00:36:44.409 [2024-10-08T15:52:36.402Z] =================================================================================================================== 00:36:44.410 [2024-10-08T15:52:36.402Z] Total : 22522.44 87.98 0.00 0.00 5680.09 2894.51 30801.92 00:36:44.410 { 00:36:44.410 "results": [ 00:36:44.410 { 00:36:44.410 "job": "Nvme0n1", 00:36:44.410 "core_mask": "0x2", 00:36:44.410 "workload": "randwrite", 00:36:44.410 "status": "finished", 00:36:44.410 "queue_depth": 128, 00:36:44.410 "io_size": 4096, 00:36:44.410 "runtime": 10.005487, 00:36:44.410 "iops": 22522.441936109655, 00:36:44.410 "mibps": 87.97828881292834, 00:36:44.410 "io_failed": 0, 00:36:44.410 "io_timeout": 0, 00:36:44.410 "avg_latency_us": 5680.092877268343, 00:36:44.410 "min_latency_us": 2894.5066666666667, 00:36:44.410 "max_latency_us": 30801.92 00:36:44.410 } 00:36:44.410 ], 00:36:44.410 "core_count": 1 00:36:44.410 } 00:36:44.410 17:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 605732 00:36:44.410 17:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 605732 ']' 00:36:44.410 17:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 605732 00:36:44.410 
17:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:36:44.410 17:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:44.410 17:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 605732 00:36:44.670 17:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:44.670 17:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:44.670 17:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 605732' 00:36:44.670 killing process with pid 605732 00:36:44.670 17:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 605732 00:36:44.670 Received shutdown signal, test time was about 10.000000 seconds 00:36:44.670 00:36:44.670 Latency(us) 00:36:44.670 [2024-10-08T15:52:36.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.670 [2024-10-08T15:52:36.662Z] =================================================================================================================== 00:36:44.670 [2024-10-08T15:52:36.662Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:44.670 17:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 605732 00:36:44.670 17:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:44.931 17:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:44.931 17:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57b29f7e-8ad5-434b-a29a-93dacb5e5926 00:36:44.931 17:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:36:45.191 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:36:45.191 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:36:45.191 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:45.452 [2024-10-08 17:52:37.238367] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:36:45.452 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57b29f7e-8ad5-434b-a29a-93dacb5e5926 00:36:45.452 17:52:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:36:45.452 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57b29f7e-8ad5-434b-a29a-93dacb5e5926 00:36:45.452 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:45.452 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:45.452 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:45.452 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:45.452 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:45.452 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:45.452 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:45.452 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:36:45.452 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57b29f7e-8ad5-434b-a29a-93dacb5e5926 00:36:45.713 request: 00:36:45.713 { 00:36:45.713 "uuid": "57b29f7e-8ad5-434b-a29a-93dacb5e5926", 00:36:45.713 "method": "bdev_lvol_get_lvstores", 00:36:45.713 "req_id": 1 00:36:45.713 } 00:36:45.713 Got JSON-RPC error response 00:36:45.713 response: 00:36:45.713 { 00:36:45.713 "code": -19, 00:36:45.713 "message": "No such device" 00:36:45.713 } 00:36:45.713 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:36:45.713 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:45.713 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:45.713 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:45.713 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:45.713 aio_bdev 00:36:45.713 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4e23c17a-7cf5-4b23-a512-b109accd1e9a 
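Annotation: the clean-variant teardown above is itself a check. Deleting the AIO base bdev hot-removes the lvolstore with it (the vbdev_lvs_hotremove_cb notice), after which bdev_lvol_get_lvstores must fail with -19 / "No such device" — the NOT/valid_exec_arg scaffolding asserts exactly that. Re-creating the same backing file then lets the lvol module re-claim the store, and the waitforbdev that follows confirms the logical volume reappears. Condensed from the rpc.py calls in this log, with $SPDK as in the sketch above:

  RPC="$SPDK/scripts/rpc.py"
  AIO=$SPDK/test/nvmf/target/aio_bdev
  LVS=57b29f7e-8ad5-434b-a29a-93dacb5e5926
  LVOL=4e23c17a-7cf5-4b23-a512-b109accd1e9a

  $RPC bdev_aio_delete aio_bdev          # hot-removes the lvolstore too
  ! $RPC bdev_lvol_get_lvstores -u $LVS  # expected failure: -19, No such device
  $RPC bdev_aio_create $AIO aio_bdev 4096
  $RPC bdev_wait_for_examine             # lvol module re-opens the store
  $RPC bdev_get_bdevs -b $LVOL -t 2000   # the lvol is back under its old UUID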
00:36:45.713 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=4e23c17a-7cf5-4b23-a512-b109accd1e9a 00:36:45.713 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:45.713 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:36:45.713 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:45.713 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:45.713 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:45.973 17:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4e23c17a-7cf5-4b23-a512-b109accd1e9a -t 2000 00:36:46.234 [ 00:36:46.234 { 00:36:46.234 "name": "4e23c17a-7cf5-4b23-a512-b109accd1e9a", 00:36:46.234 "aliases": [ 00:36:46.234 "lvs/lvol" 00:36:46.234 ], 00:36:46.234 "product_name": "Logical Volume", 00:36:46.234 "block_size": 4096, 00:36:46.234 "num_blocks": 38912, 00:36:46.234 "uuid": "4e23c17a-7cf5-4b23-a512-b109accd1e9a", 00:36:46.234 "assigned_rate_limits": { 00:36:46.234 "rw_ios_per_sec": 0, 00:36:46.234 "rw_mbytes_per_sec": 0, 00:36:46.234 "r_mbytes_per_sec": 0, 00:36:46.234 "w_mbytes_per_sec": 0 00:36:46.234 }, 00:36:46.234 "claimed": false, 00:36:46.234 "zoned": false, 00:36:46.234 "supported_io_types": { 00:36:46.234 "read": true, 00:36:46.234 "write": true, 00:36:46.234 "unmap": true, 00:36:46.234 "flush": false, 00:36:46.234 "reset": true, 00:36:46.234 "nvme_admin": false, 00:36:46.234 "nvme_io": false, 00:36:46.234 "nvme_io_md": false, 00:36:46.234 "write_zeroes": true, 00:36:46.234 "zcopy": false, 00:36:46.234 "get_zone_info": false, 00:36:46.234 "zone_management": false, 00:36:46.234 "zone_append": false, 00:36:46.234 "compare": false, 00:36:46.234 "compare_and_write": false, 00:36:46.234 "abort": false, 00:36:46.234 "seek_hole": true, 00:36:46.234 "seek_data": true, 00:36:46.234 "copy": false, 00:36:46.234 "nvme_iov_md": false 00:36:46.234 }, 00:36:46.234 "driver_specific": { 00:36:46.234 "lvol": { 00:36:46.234 "lvol_store_uuid": "57b29f7e-8ad5-434b-a29a-93dacb5e5926", 00:36:46.234 "base_bdev": "aio_bdev", 00:36:46.234 "thin_provision": false, 00:36:46.234 "num_allocated_clusters": 38, 00:36:46.234 "snapshot": false, 00:36:46.234 "clone": false, 00:36:46.234 "esnap_clone": false 00:36:46.234 } 00:36:46.234 } 00:36:46.234 } 00:36:46.234 ] 00:36:46.234 17:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:36:46.234 17:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57b29f7e-8ad5-434b-a29a-93dacb5e5926 00:36:46.234 17:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:36:46.234 17:52:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:36:46.234 17:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57b29f7e-8ad5-434b-a29a-93dacb5e5926 00:36:46.234 17:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:36:46.498 17:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:36:46.498 17:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4e23c17a-7cf5-4b23-a512-b109accd1e9a 00:36:46.761 17:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 57b29f7e-8ad5-434b-a29a-93dacb5e5926 00:36:47.021 17:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:47.021 17:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:47.021 00:36:47.021 real 0m16.001s 00:36:47.021 user 0m15.649s 00:36:47.021 sys 0m1.440s 00:36:47.021 17:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:47.021 17:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:36:47.021 ************************************ 00:36:47.021 END TEST lvs_grow_clean 00:36:47.021 ************************************ 00:36:47.282 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:36:47.282 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:47.282 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:47.282 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:47.282 ************************************ 00:36:47.282 START TEST lvs_grow_dirty 00:36:47.282 ************************************ 00:36:47.282 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:36:47.282 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:36:47.282 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:36:47.282 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:36:47.282 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:36:47.282 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:36:47.282 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:36:47.282 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:47.282 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:47.282 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:47.542 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:36:47.542 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:36:47.542 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ed1abfec-a4cc-4c10-91a7-6add7e8bce43 00:36:47.542 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ed1abfec-a4cc-4c10-91a7-6add7e8bce43 00:36:47.542 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:36:47.803 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:36:47.803 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:36:47.803 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ed1abfec-a4cc-4c10-91a7-6add7e8bce43 lvol 150 00:36:48.064 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=93038ac2-e3f7-4145-ae86-19743b599489 00:36:48.064 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:48.064 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:36:48.064 [2024-10-08 17:52:39.966279] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:36:48.064 [2024-10-08 17:52:39.966446] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:36:48.064 true 00:36:48.064 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ed1abfec-a4cc-4c10-91a7-6add7e8bce43 00:36:48.064 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:36:48.325 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:36:48.325 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:48.587 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 93038ac2-e3f7-4145-ae86-19743b599489 00:36:48.587 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:48.848 [2024-10-08 17:52:40.674885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:48.848 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:49.109 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=608725 00:36:49.109 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:49.109 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:36:49.109 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 608725 /var/tmp/bdevperf.sock 00:36:49.109 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 608725 ']' 00:36:49.109 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:49.109 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:49.109 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:49.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
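Annotation: the dirty variant repeats the same export path (subsystem SPDK0, the lvol as a namespace, the 10.0.0.2:4420 listener) and runs the identical bdevperf workload; the difference comes later, in how the lvolstore is shut down. The grow step exercised by both variants is the three-call sequence recorded in this log: enlarge the backing file, rescan the AIO bdev (51200 -> 102400 blocks), and let the lvolstore claim the new space (49 -> 99 data clusters at the 4 MiB cluster size). Sketch, reusing $RPC and $AIO from the sketch above:

  LVS=ed1abfec-a4cc-4c10-91a7-6add7e8bce43

  truncate -s 400M $AIO                  # backing file: 200M -> 400M
  $RPC bdev_aio_rescan aio_bdev          # bdev grows: 51200 -> 102400 blocks
  $RPC bdev_lvol_grow_lvstore -u $LVS    # lvolstore absorbs the new clusters
  $RPC bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].total_data_clusters'   # now 99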
00:36:49.109 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:49.109 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:49.109 [2024-10-08 17:52:40.939082] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:36:49.109 [2024-10-08 17:52:40.939159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid608725 ] 00:36:49.109 [2024-10-08 17:52:41.021432] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:49.109 [2024-10-08 17:52:41.091474] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:50.050 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:50.050 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:36:50.050 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:36:50.311 Nvme0n1 00:36:50.311 17:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:36:50.311 [ 00:36:50.311 { 00:36:50.311 "name": "Nvme0n1", 00:36:50.311 "aliases": [ 00:36:50.311 "93038ac2-e3f7-4145-ae86-19743b599489" 00:36:50.311 ], 00:36:50.311 "product_name": "NVMe disk", 00:36:50.311 "block_size": 4096, 00:36:50.311 "num_blocks": 38912, 00:36:50.311 "uuid": "93038ac2-e3f7-4145-ae86-19743b599489", 00:36:50.311 "numa_id": 0, 00:36:50.311 "assigned_rate_limits": { 00:36:50.311 "rw_ios_per_sec": 0, 00:36:50.311 "rw_mbytes_per_sec": 0, 00:36:50.311 "r_mbytes_per_sec": 0, 00:36:50.311 "w_mbytes_per_sec": 0 00:36:50.311 }, 00:36:50.311 "claimed": false, 00:36:50.311 "zoned": false, 00:36:50.311 "supported_io_types": { 00:36:50.311 "read": true, 00:36:50.311 "write": true, 00:36:50.311 "unmap": true, 00:36:50.311 "flush": true, 00:36:50.311 "reset": true, 00:36:50.311 "nvme_admin": true, 00:36:50.311 "nvme_io": true, 00:36:50.311 "nvme_io_md": false, 00:36:50.311 "write_zeroes": true, 00:36:50.311 "zcopy": false, 00:36:50.311 "get_zone_info": false, 00:36:50.311 "zone_management": false, 00:36:50.311 "zone_append": false, 00:36:50.311 "compare": true, 00:36:50.311 "compare_and_write": true, 00:36:50.311 "abort": true, 00:36:50.311 "seek_hole": false, 00:36:50.311 "seek_data": false, 00:36:50.311 "copy": true, 00:36:50.311 "nvme_iov_md": false 00:36:50.311 }, 00:36:50.311 "memory_domains": [ 00:36:50.311 { 00:36:50.311 "dma_device_id": "system", 00:36:50.311 "dma_device_type": 1 00:36:50.311 } 00:36:50.311 ], 00:36:50.311 "driver_specific": { 00:36:50.311 "nvme": [ 00:36:50.311 { 00:36:50.311 "trid": { 00:36:50.311 "trtype": "TCP", 00:36:50.311 "adrfam": "IPv4", 00:36:50.311 "traddr": "10.0.0.2", 00:36:50.311 "trsvcid": "4420", 00:36:50.311 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:36:50.311 }, 00:36:50.311 "ctrlr_data": { 
00:36:50.311 "cntlid": 1, 00:36:50.311 "vendor_id": "0x8086", 00:36:50.311 "model_number": "SPDK bdev Controller", 00:36:50.311 "serial_number": "SPDK0", 00:36:50.311 "firmware_revision": "25.01", 00:36:50.311 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:50.311 "oacs": { 00:36:50.311 "security": 0, 00:36:50.311 "format": 0, 00:36:50.311 "firmware": 0, 00:36:50.311 "ns_manage": 0 00:36:50.311 }, 00:36:50.311 "multi_ctrlr": true, 00:36:50.311 "ana_reporting": false 00:36:50.311 }, 00:36:50.311 "vs": { 00:36:50.311 "nvme_version": "1.3" 00:36:50.311 }, 00:36:50.311 "ns_data": { 00:36:50.311 "id": 1, 00:36:50.311 "can_share": true 00:36:50.311 } 00:36:50.311 } 00:36:50.311 ], 00:36:50.311 "mp_policy": "active_passive" 00:36:50.311 } 00:36:50.311 } 00:36:50.311 ] 00:36:50.311 17:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=608887 00:36:50.311 17:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:36:50.311 17:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:50.571 Running I/O for 10 seconds... 00:36:51.512 Latency(us) 00:36:51.512 [2024-10-08T15:52:43.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:51.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:51.512 Nvme0n1 : 1.00 17526.00 68.46 0.00 0.00 0.00 0.00 0.00 00:36:51.512 [2024-10-08T15:52:43.504Z] =================================================================================================================== 00:36:51.512 [2024-10-08T15:52:43.505Z] Total : 17526.00 68.46 0.00 0.00 0.00 0.00 0.00 00:36:51.513 00:36:52.455 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ed1abfec-a4cc-4c10-91a7-6add7e8bce43 00:36:52.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:52.455 Nvme0n1 : 2.00 17780.00 69.45 0.00 0.00 0.00 0.00 0.00 00:36:52.455 [2024-10-08T15:52:44.447Z] =================================================================================================================== 00:36:52.455 [2024-10-08T15:52:44.447Z] Total : 17780.00 69.45 0.00 0.00 0.00 0.00 0.00 00:36:52.455 00:36:52.716 true 00:36:52.716 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ed1abfec-a4cc-4c10-91a7-6add7e8bce43 00:36:52.716 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:36:52.716 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:36:52.716 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:36:52.716 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 608887 00:36:53.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:53.668 Nvme0n1 : 3.00 
17886.00 69.87 0.00 0.00 0.00 0.00 0.00 00:36:53.668 [2024-10-08T15:52:45.660Z] =================================================================================================================== 00:36:53.668 [2024-10-08T15:52:45.660Z] Total : 17886.00 69.87 0.00 0.00 0.00 0.00 0.00 00:36:53.668 00:36:54.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:54.609 Nvme0n1 : 4.00 17938.75 70.07 0.00 0.00 0.00 0.00 0.00 00:36:54.609 [2024-10-08T15:52:46.601Z] =================================================================================================================== 00:36:54.609 [2024-10-08T15:52:46.602Z] Total : 17938.75 70.07 0.00 0.00 0.00 0.00 0.00 00:36:54.610 00:36:55.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:55.551 Nvme0n1 : 5.00 18719.80 73.12 0.00 0.00 0.00 0.00 0.00 00:36:55.551 [2024-10-08T15:52:47.543Z] =================================================================================================================== 00:36:55.551 [2024-10-08T15:52:47.543Z] Total : 18719.80 73.12 0.00 0.00 0.00 0.00 0.00 00:36:55.551 00:36:56.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:56.492 Nvme0n1 : 6.00 19854.33 77.56 0.00 0.00 0.00 0.00 0.00 00:36:56.492 [2024-10-08T15:52:48.484Z] =================================================================================================================== 00:36:56.492 [2024-10-08T15:52:48.484Z] Total : 19854.33 77.56 0.00 0.00 0.00 0.00 0.00 00:36:56.492 00:36:57.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:57.433 Nvme0n1 : 7.00 20667.14 80.73 0.00 0.00 0.00 0.00 0.00 00:36:57.433 [2024-10-08T15:52:49.425Z] =================================================================================================================== 00:36:57.433 [2024-10-08T15:52:49.425Z] Total : 20667.14 80.73 0.00 0.00 0.00 0.00 0.00 00:36:57.433 00:36:58.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:58.819 Nvme0n1 : 8.00 21274.62 83.10 0.00 0.00 0.00 0.00 0.00 00:36:58.819 [2024-10-08T15:52:50.811Z] =================================================================================================================== 00:36:58.819 [2024-10-08T15:52:50.811Z] Total : 21274.62 83.10 0.00 0.00 0.00 0.00 0.00 00:36:58.819 00:36:59.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:59.760 Nvme0n1 : 9.00 21754.33 84.98 0.00 0.00 0.00 0.00 0.00 00:36:59.760 [2024-10-08T15:52:51.752Z] =================================================================================================================== 00:36:59.760 [2024-10-08T15:52:51.752Z] Total : 21754.33 84.98 0.00 0.00 0.00 0.00 0.00 00:36:59.760 00:37:00.702 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:00.702 Nvme0n1 : 10.00 22141.30 86.49 0.00 0.00 0.00 0.00 0.00 00:37:00.702 [2024-10-08T15:52:52.694Z] =================================================================================================================== 00:37:00.702 [2024-10-08T15:52:52.694Z] Total : 22141.30 86.49 0.00 0.00 0.00 0.00 0.00 00:37:00.702 00:37:00.702 00:37:00.702 Latency(us) 00:37:00.702 [2024-10-08T15:52:52.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:00.702 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:00.702 Nvme0n1 : 10.01 22142.92 86.50 0.00 0.00 5778.18 2962.77 31020.37 00:37:00.702 
[2024-10-08T15:52:52.694Z] =================================================================================================================== 00:37:00.702 [2024-10-08T15:52:52.694Z] Total : 22142.92 86.50 0.00 0.00 5778.18 2962.77 31020.37 00:37:00.702 { 00:37:00.702 "results": [ 00:37:00.702 { 00:37:00.702 "job": "Nvme0n1", 00:37:00.702 "core_mask": "0x2", 00:37:00.702 "workload": "randwrite", 00:37:00.702 "status": "finished", 00:37:00.702 "queue_depth": 128, 00:37:00.702 "io_size": 4096, 00:37:00.702 "runtime": 10.005051, 00:37:00.702 "iops": 22142.91561332371, 00:37:00.702 "mibps": 86.49576411454574, 00:37:00.702 "io_failed": 0, 00:37:00.702 "io_timeout": 0, 00:37:00.702 "avg_latency_us": 5778.177397050659, 00:37:00.702 "min_latency_us": 2962.7733333333335, 00:37:00.702 "max_latency_us": 31020.373333333333 00:37:00.702 } 00:37:00.702 ], 00:37:00.702 "core_count": 1 00:37:00.702 } 00:37:00.702 17:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 608725 00:37:00.702 17:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 608725 ']' 00:37:00.702 17:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 608725 00:37:00.702 17:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:37:00.702 17:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:00.702 17:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 608725 00:37:00.702 17:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:00.702 17:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:00.702 17:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 608725' 00:37:00.702 killing process with pid 608725 00:37:00.702 17:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 608725 00:37:00.702 Received shutdown signal, test time was about 10.000000 seconds 00:37:00.702 00:37:00.702 Latency(us) 00:37:00.702 [2024-10-08T15:52:52.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:00.702 [2024-10-08T15:52:52.694Z] =================================================================================================================== 00:37:00.702 [2024-10-08T15:52:52.694Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:00.702 17:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 608725 00:37:00.702 17:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:00.964 17:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:37:01.224 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ed1abfec-a4cc-4c10-91a7-6add7e8bce43 00:37:01.224 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:37:01.224 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:01.224 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:37:01.224 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 605054 00:37:01.224 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 605054 00:37:01.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 605054 Killed "${NVMF_APP[@]}" "$@" 00:37:01.486 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:37:01.486 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:37:01.486 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:01.486 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:01.486 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:01.486 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=611031 00:37:01.486 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 611031 00:37:01.486 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:01.486 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 611031 ']' 00:37:01.486 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:01.486 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:01.486 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:01.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
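Annotation: this is the step that makes the variant "dirty". With the lvolstore still open (61 free clusters, 38 allocated to the lvol), the original target (pid 605054) is killed with SIGKILL, so the blobstore is never closed cleanly — hence the "Killed" message from line 75 of nvmf_lvs_grow.sh. A fresh nvmf_tgt is then started, here with --interrupt-mode since this run is the interrupt-mode suite (the log launches it inside the cvl_0_0_ns_spdk network namespace). Sketched from the log; waitforlisten is an autotest_common.sh helper that polls for the RPC socket:

  kill -9 605054                         # no clean shutdown: blobstore left dirty
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  nvmfpid=$!
  waitforlisten $nvmfpid                 # waits for /var/tmp/spdk.sock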
00:37:01.486 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:01.486 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:01.486 [2024-10-08 17:52:53.295343] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:01.486 [2024-10-08 17:52:53.296380] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:37:01.486 [2024-10-08 17:52:53.296426] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:01.486 [2024-10-08 17:52:53.380601] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:01.486 [2024-10-08 17:52:53.436588] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:01.486 [2024-10-08 17:52:53.436620] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:01.486 [2024-10-08 17:52:53.436626] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:01.486 [2024-10-08 17:52:53.436630] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:01.486 [2024-10-08 17:52:53.436635] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:01.486 [2024-10-08 17:52:53.437094] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:01.746 [2024-10-08 17:52:53.487031] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:01.746 [2024-10-08 17:52:53.487237] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
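Annotation: with the replacement target up, re-creating the AIO bdev over the old file is what triggers recovery. Because the store was never marked clean, the blobstore load falls back to bs_recover and replays metadata for each blob (0x0 and 0x1 in the notices that follow), after which the harness verifies nothing was lost. A sketch of those checks under the same UUIDs, reusing $RPC, $AIO, and $LVS from the sketches above (waitforbdev wraps the bdev_get_bdevs poll):

  $RPC bdev_aio_create $AIO aio_bdev 4096    # dirty store -> bs_recover runs
  $RPC bdev_wait_for_examine
  $RPC bdev_get_bdevs -b 93038ac2-e3f7-4145-ae86-19743b599489 -t 2000   # lvol survived
  $RPC bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].free_clusters'      # still 61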
00:37:02.317 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:02.317 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:37:02.317 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:02.317 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:02.317 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:02.317 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:02.317 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:02.317 [2024-10-08 17:52:54.291105] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:37:02.318 [2024-10-08 17:52:54.291327] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:37:02.318 [2024-10-08 17:52:54.291416] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:37:02.578 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:37:02.578 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 93038ac2-e3f7-4145-ae86-19743b599489 00:37:02.578 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=93038ac2-e3f7-4145-ae86-19743b599489 00:37:02.578 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:02.578 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:37:02.578 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:02.578 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:02.578 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:02.578 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 93038ac2-e3f7-4145-ae86-19743b599489 -t 2000 00:37:02.839 [ 00:37:02.839 { 00:37:02.839 "name": "93038ac2-e3f7-4145-ae86-19743b599489", 00:37:02.839 "aliases": [ 00:37:02.839 "lvs/lvol" 00:37:02.839 ], 00:37:02.839 "product_name": "Logical Volume", 00:37:02.839 "block_size": 4096, 00:37:02.839 "num_blocks": 38912, 00:37:02.839 "uuid": "93038ac2-e3f7-4145-ae86-19743b599489", 00:37:02.839 "assigned_rate_limits": { 00:37:02.839 "rw_ios_per_sec": 0, 00:37:02.839 "rw_mbytes_per_sec": 0, 00:37:02.839 
"r_mbytes_per_sec": 0, 00:37:02.839 "w_mbytes_per_sec": 0 00:37:02.839 }, 00:37:02.839 "claimed": false, 00:37:02.839 "zoned": false, 00:37:02.839 "supported_io_types": { 00:37:02.839 "read": true, 00:37:02.839 "write": true, 00:37:02.839 "unmap": true, 00:37:02.839 "flush": false, 00:37:02.839 "reset": true, 00:37:02.839 "nvme_admin": false, 00:37:02.839 "nvme_io": false, 00:37:02.839 "nvme_io_md": false, 00:37:02.839 "write_zeroes": true, 00:37:02.839 "zcopy": false, 00:37:02.839 "get_zone_info": false, 00:37:02.839 "zone_management": false, 00:37:02.839 "zone_append": false, 00:37:02.839 "compare": false, 00:37:02.839 "compare_and_write": false, 00:37:02.839 "abort": false, 00:37:02.839 "seek_hole": true, 00:37:02.839 "seek_data": true, 00:37:02.839 "copy": false, 00:37:02.839 "nvme_iov_md": false 00:37:02.839 }, 00:37:02.839 "driver_specific": { 00:37:02.839 "lvol": { 00:37:02.839 "lvol_store_uuid": "ed1abfec-a4cc-4c10-91a7-6add7e8bce43", 00:37:02.839 "base_bdev": "aio_bdev", 00:37:02.839 "thin_provision": false, 00:37:02.839 "num_allocated_clusters": 38, 00:37:02.839 "snapshot": false, 00:37:02.839 "clone": false, 00:37:02.839 "esnap_clone": false 00:37:02.839 } 00:37:02.839 } 00:37:02.839 } 00:37:02.839 ] 00:37:02.839 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:37:02.839 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ed1abfec-a4cc-4c10-91a7-6add7e8bce43 00:37:02.839 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:37:03.101 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:37:03.101 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ed1abfec-a4cc-4c10-91a7-6add7e8bce43 00:37:03.101 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:37:03.101 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:37:03.101 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:03.362 [2024-10-08 17:52:55.149566] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:03.362 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ed1abfec-a4cc-4c10-91a7-6add7e8bce43 00:37:03.362 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:37:03.362 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ed1abfec-a4cc-4c10-91a7-6add7e8bce43 00:37:03.362 17:52:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:03.362 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:03.362 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:03.362 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:03.362 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:03.362 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:03.362 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:03.362 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:03.362 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ed1abfec-a4cc-4c10-91a7-6add7e8bce43 00:37:03.623 request: 00:37:03.623 { 00:37:03.623 "uuid": "ed1abfec-a4cc-4c10-91a7-6add7e8bce43", 00:37:03.623 "method": "bdev_lvol_get_lvstores", 00:37:03.623 "req_id": 1 00:37:03.623 } 00:37:03.623 Got JSON-RPC error response 00:37:03.623 response: 00:37:03.623 { 00:37:03.623 "code": -19, 00:37:03.623 "message": "No such device" 00:37:03.623 } 00:37:03.623 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:37:03.623 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:03.623 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:03.623 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:03.623 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:03.623 aio_bdev 00:37:03.623 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 93038ac2-e3f7-4145-ae86-19743b599489 00:37:03.623 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=93038ac2-e3f7-4145-ae86-19743b599489 00:37:03.623 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:03.623 17:52:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:37:03.623 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:03.623 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:03.623 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:03.883 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 93038ac2-e3f7-4145-ae86-19743b599489 -t 2000 00:37:04.144 [ 00:37:04.144 { 00:37:04.144 "name": "93038ac2-e3f7-4145-ae86-19743b599489", 00:37:04.144 "aliases": [ 00:37:04.144 "lvs/lvol" 00:37:04.144 ], 00:37:04.144 "product_name": "Logical Volume", 00:37:04.144 "block_size": 4096, 00:37:04.144 "num_blocks": 38912, 00:37:04.144 "uuid": "93038ac2-e3f7-4145-ae86-19743b599489", 00:37:04.144 "assigned_rate_limits": { 00:37:04.144 "rw_ios_per_sec": 0, 00:37:04.144 "rw_mbytes_per_sec": 0, 00:37:04.144 "r_mbytes_per_sec": 0, 00:37:04.144 "w_mbytes_per_sec": 0 00:37:04.144 }, 00:37:04.144 "claimed": false, 00:37:04.144 "zoned": false, 00:37:04.144 "supported_io_types": { 00:37:04.144 "read": true, 00:37:04.144 "write": true, 00:37:04.144 "unmap": true, 00:37:04.144 "flush": false, 00:37:04.144 "reset": true, 00:37:04.144 "nvme_admin": false, 00:37:04.144 "nvme_io": false, 00:37:04.144 "nvme_io_md": false, 00:37:04.144 "write_zeroes": true, 00:37:04.144 "zcopy": false, 00:37:04.144 "get_zone_info": false, 00:37:04.144 "zone_management": false, 00:37:04.144 "zone_append": false, 00:37:04.144 "compare": false, 00:37:04.144 "compare_and_write": false, 00:37:04.144 "abort": false, 00:37:04.144 "seek_hole": true, 00:37:04.144 "seek_data": true, 00:37:04.144 "copy": false, 00:37:04.144 "nvme_iov_md": false 00:37:04.144 }, 00:37:04.144 "driver_specific": { 00:37:04.144 "lvol": { 00:37:04.144 "lvol_store_uuid": "ed1abfec-a4cc-4c10-91a7-6add7e8bce43", 00:37:04.144 "base_bdev": "aio_bdev", 00:37:04.144 "thin_provision": false, 00:37:04.144 "num_allocated_clusters": 38, 00:37:04.144 "snapshot": false, 00:37:04.144 "clone": false, 00:37:04.144 "esnap_clone": false 00:37:04.144 } 00:37:04.144 } 00:37:04.144 } 00:37:04.144 ] 00:37:04.144 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:37:04.144 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ed1abfec-a4cc-4c10-91a7-6add7e8bce43 00:37:04.144 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:37:04.144 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:37:04.144 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ed1abfec-a4cc-4c10-91a7-6add7e8bce43 00:37:04.144 17:52:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:37:04.405 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:37:04.405 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 93038ac2-e3f7-4145-ae86-19743b599489 00:37:04.405 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ed1abfec-a4cc-4c10-91a7-6add7e8bce43 00:37:04.665 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:04.927 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:04.927 00:37:04.927 real 0m17.738s 00:37:04.927 user 0m35.669s 00:37:04.927 sys 0m3.080s 00:37:04.927 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:04.927 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:04.927 ************************************ 00:37:04.927 END TEST lvs_grow_dirty 00:37:04.927 ************************************ 00:37:04.927 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:37:04.927 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:37:04.927 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:37:04.927 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:37:04.927 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:37:04.927 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:37:04.927 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:37:04.927 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:37:04.927 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:37:04.927 nvmf_trace.0 00:37:04.927 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:37:04.927 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:37:04.927 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:04.927 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
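Condensed, the lvs_grow_dirty teardown just traced is three JSON-RPC calls plus one file removal (a sketch using this run's UUIDs; run from the SPDK tree, socket defaulting to /var/tmp/spdk.sock):

  scripts/rpc.py bdev_lvol_delete 93038ac2-e3f7-4145-ae86-19743b599489             # drop the lvol
  scripts/rpc.py bdev_lvol_delete_lvstore -u ed1abfec-a4cc-4c10-91a7-6add7e8bce43  # drop the lvstore
  scripts/rpc.py bdev_aio_delete aio_bdev                                          # detach the AIO bdev
  rm -f test/nvmf/target/aio_bdev                                                  # remove its backing file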
00:37:04.927 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:04.927 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:37:04.927 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:04.927 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:04.927 rmmod nvme_tcp 00:37:05.188 rmmod nvme_fabrics 00:37:05.188 rmmod nvme_keyring 00:37:05.188 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:05.188 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:37:05.188 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:37:05.188 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 611031 ']' 00:37:05.188 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 611031 00:37:05.188 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 611031 ']' 00:37:05.188 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 611031 00:37:05.188 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:37:05.188 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:05.188 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 611031 00:37:05.188 17:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:05.188 17:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:05.188 17:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 611031' 00:37:05.188 killing process with pid 611031 00:37:05.188 17:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 611031 00:37:05.188 17:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 611031 00:37:05.450 17:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:05.450 17:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:05.450 17:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:05.450 17:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:37:05.450 17:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:37:05.450 17:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:05.450 17:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:37:05.450 17:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:05.450 17:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:05.450 17:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:05.450 17:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:05.450 17:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:07.363 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:07.363 00:37:07.363 real 0m45.257s 00:37:07.363 user 0m54.380s 00:37:07.363 sys 0m10.701s 00:37:07.363 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:07.363 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:07.363 ************************************ 00:37:07.363 END TEST nvmf_lvs_grow 00:37:07.363 ************************************ 00:37:07.363 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:37:07.363 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:07.363 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:07.363 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:07.625 ************************************ 00:37:07.625 START TEST nvmf_bdev_io_wait 00:37:07.625 ************************************ 00:37:07.625 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:37:07.625 * Looking for test storage... 
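run_test above wraps the next script in a named banner plus timing; reproducing this step by hand outside the harness is simply (paths taken from this workspace):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode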
00:37:07.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:07.625 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:07.625 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:37:07.625 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:07.625 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:07.625 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:07.625 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:07.625 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:07.625 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:37:07.625 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:37:07.625 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:37:07.625 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:37:07.625 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:37:07.625 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:07.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.626 --rc genhtml_branch_coverage=1 00:37:07.626 --rc genhtml_function_coverage=1 00:37:07.626 --rc genhtml_legend=1 00:37:07.626 --rc geninfo_all_blocks=1 00:37:07.626 --rc geninfo_unexecuted_blocks=1 00:37:07.626 00:37:07.626 ' 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:07.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.626 --rc genhtml_branch_coverage=1 00:37:07.626 --rc genhtml_function_coverage=1 00:37:07.626 --rc genhtml_legend=1 00:37:07.626 --rc geninfo_all_blocks=1 00:37:07.626 --rc geninfo_unexecuted_blocks=1 00:37:07.626 00:37:07.626 ' 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:07.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.626 --rc genhtml_branch_coverage=1 00:37:07.626 --rc genhtml_function_coverage=1 00:37:07.626 --rc genhtml_legend=1 00:37:07.626 --rc geninfo_all_blocks=1 00:37:07.626 --rc geninfo_unexecuted_blocks=1 00:37:07.626 00:37:07.626 ' 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:07.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.626 --rc genhtml_branch_coverage=1 00:37:07.626 --rc genhtml_function_coverage=1 00:37:07.626 --rc genhtml_legend=1 00:37:07.626 --rc geninfo_all_blocks=1 00:37:07.626 --rc 
geninfo_unexecuted_blocks=1 00:37:07.626 00:37:07.626 ' 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:37:07.626 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:07.887 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:07.887 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:07.887 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:07.887 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:07.887 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:07.887 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:07.887 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:37:07.887 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:07.887 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:07.887 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:07.887 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:07.887 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:07.887 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:07.887 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:07.887 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:07.887 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:07.887 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:07.887 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:37:07.887 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:16.033 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:16.033 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:37:16.033 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:16.033 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:16.033 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:16.033 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:16.033 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
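The cmp_versions xtrace a few records back (lt 1.15 2, for the lcov version check) walks an element-wise compare of dot-split version strings. A simplified, runnable sketch of that logic (the real scripts/common.sh additionally normalizes components through decimal()):

  lt() {   # succeed iff $1 sorts strictly before $2 as a version
      local -a ver1 ver2; local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal versions are not less-than
  }
  lt 1.15 2 && echo '1.15 < 2'   # matches the trace: 1 < 2 already on the first element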
00:37:16.033 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:37:16.033 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:16.033 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:37:16.033 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
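The arrays initialized above pin each NIC family to PCI device IDs (e810: 0x1592/0x159b; x722: 0x37d2; mlx: 0x1013 through 0xa2dc), which common.sh then resolves through a prebuilt pci_bus_cache. A hand-run approximation with lspci (an assumption for illustration; not what the script itself executes):

  lspci -D -d 8086:159b   # Intel E810 0x159b -- the two ports found below at 0000:31:00.0/.1
  lspci -D -d 8086:1592   # Intel E810 0x1592
  lspci -D -d 15b3:       # any Mellanox device, vendor 0x15b3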
00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:16.034 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:16.034 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:16.034 Found net devices under 0000:31:00.0: cvl_0_0 00:37:16.034 
17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:16.034 Found net devices under 0000:31:00.1: cvl_0_1 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:16.034 17:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:16.034 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:16.034 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:16.034 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:16.034 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:16.034 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:16.034 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:16.034 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:16.034 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:16.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:16.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:37:16.034 00:37:16.034 --- 10.0.0.2 ping statistics --- 00:37:16.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:16.034 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:37:16.034 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:16.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:16.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:37:16.034 00:37:16.034 --- 10.0.0.1 ping statistics --- 00:37:16.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:16.034 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:37:16.034 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:16.034 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:37:16.034 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:16.034 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:16.034 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:16.034 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:16.034 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:16.035 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:16.035 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:16.035 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:37:16.035 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:16.035 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:16.035 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:16.035 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=615992 00:37:16.035 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 615992 00:37:16.035 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:37:16.035 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 615992 ']' 00:37:16.035 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:16.035 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:16.035 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:16.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
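Stripped of xtrace noise, the target/initiator split traced above is one network namespace plus one iptables rule, verified by the two pings (interface names and addresses are this run's; 4420 is the NVMe/TCP listen port):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # then the reverse ping from inside the netns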
00:37:16.035 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:16.035 17:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:16.035 [2024-10-08 17:53:07.341964] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:16.035 [2024-10-08 17:53:07.343128] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:37:16.035 [2024-10-08 17:53:07.343183] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:16.035 [2024-10-08 17:53:07.432203] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:16.035 [2024-10-08 17:53:07.528553] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:16.035 [2024-10-08 17:53:07.528616] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:16.035 [2024-10-08 17:53:07.528628] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:16.035 [2024-10-08 17:53:07.528638] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:16.035 [2024-10-08 17:53:07.528648] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:16.035 [2024-10-08 17:53:07.530797] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:16.035 [2024-10-08 17:53:07.530960] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:37:16.035 [2024-10-08 17:53:07.531123] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:37:16.035 [2024-10-08 17:53:07.531123] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:16.035 [2024-10-08 17:53:07.531809] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
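The waitforlisten 615992 above blocks until that pid's RPC socket answers. A simplified sketch of the same wait (the real helper in autotest_common.sh also verifies the process is still alive and gives up after a retry budget):

  until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done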
00:37:16.296 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:16.296 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:37:16.296 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:16.297 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:16.297 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:16.297 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:16.297 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:37:16.297 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.297 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:16.297 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.297 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:37:16.297 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.297 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:16.297 [2024-10-08 17:53:08.267443] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:16.297 [2024-10-08 17:53:08.268403] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:16.297 [2024-10-08 17:53:08.268463] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:16.297 [2024-10-08 17:53:08.268649] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
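rpc_cmd is a thin wrapper that forwards its arguments to scripts/rpc.py on /var/tmp/spdk.sock, so the two setup calls above are equivalent to the following (the comments are interpretation: starving the bdev_io pool is what forces IO to wait, the behavior this test exercises):

  scripts/rpc.py bdev_set_options -p 5 -c 1   # tiny bdev_io pool (5) and per-channel cache (1)
  scripts/rpc.py framework_start_init         # finish the init deferred by --wait-for-rpc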
00:37:16.297 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.297 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:16.297 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.297 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:16.297 [2024-10-08 17:53:08.280053] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:16.558 Malloc0 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:16.558 [2024-10-08 17:53:08.368685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=616340 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=616342 00:37:16.558 17:53:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:16.558 { 00:37:16.558 "params": { 00:37:16.558 "name": "Nvme$subsystem", 00:37:16.558 "trtype": "$TEST_TRANSPORT", 00:37:16.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:16.558 "adrfam": "ipv4", 00:37:16.558 "trsvcid": "$NVMF_PORT", 00:37:16.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:16.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:16.558 "hdgst": ${hdgst:-false}, 00:37:16.558 "ddgst": ${ddgst:-false} 00:37:16.558 }, 00:37:16.558 "method": "bdev_nvme_attach_controller" 00:37:16.558 } 00:37:16.558 EOF 00:37:16.558 )") 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=616344 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:16.558 { 00:37:16.558 "params": { 00:37:16.558 "name": "Nvme$subsystem", 00:37:16.558 "trtype": "$TEST_TRANSPORT", 00:37:16.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:16.558 "adrfam": "ipv4", 00:37:16.558 "trsvcid": "$NVMF_PORT", 00:37:16.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:16.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:16.558 "hdgst": ${hdgst:-false}, 00:37:16.558 "ddgst": ${ddgst:-false} 00:37:16.558 }, 00:37:16.558 "method": "bdev_nvme_attach_controller" 00:37:16.558 } 00:37:16.558 EOF 00:37:16.558 )") 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=616347 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 
00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:16.558 { 00:37:16.558 "params": { 00:37:16.558 "name": "Nvme$subsystem", 00:37:16.558 "trtype": "$TEST_TRANSPORT", 00:37:16.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:16.558 "adrfam": "ipv4", 00:37:16.558 "trsvcid": "$NVMF_PORT", 00:37:16.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:16.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:16.558 "hdgst": ${hdgst:-false}, 00:37:16.558 "ddgst": ${ddgst:-false} 00:37:16.558 }, 00:37:16.558 "method": "bdev_nvme_attach_controller" 00:37:16.558 } 00:37:16.558 EOF 00:37:16.558 )") 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:16.558 { 00:37:16.558 "params": { 00:37:16.558 "name": "Nvme$subsystem", 00:37:16.558 "trtype": "$TEST_TRANSPORT", 00:37:16.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:16.558 "adrfam": "ipv4", 00:37:16.558 "trsvcid": "$NVMF_PORT", 00:37:16.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:16.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:16.558 "hdgst": ${hdgst:-false}, 00:37:16.558 "ddgst": ${ddgst:-false} 00:37:16.558 }, 00:37:16.558 "method": "bdev_nvme_attach_controller" 00:37:16.558 } 00:37:16.558 EOF 00:37:16.558 )") 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 616340 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:16.558 "params": { 00:37:16.558 "name": "Nvme1", 00:37:16.558 "trtype": "tcp", 00:37:16.558 "traddr": "10.0.0.2", 00:37:16.558 "adrfam": "ipv4", 00:37:16.558 "trsvcid": "4420", 00:37:16.558 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:16.558 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:16.558 "hdgst": false, 00:37:16.558 "ddgst": false 00:37:16.558 }, 00:37:16.558 "method": "bdev_nvme_attach_controller" 00:37:16.558 }' 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:37:16.558 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:37:16.559 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:16.559 "params": { 00:37:16.559 "name": "Nvme1", 00:37:16.559 "trtype": "tcp", 00:37:16.559 "traddr": "10.0.0.2", 00:37:16.559 "adrfam": "ipv4", 00:37:16.559 "trsvcid": "4420", 00:37:16.559 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:16.559 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:16.559 "hdgst": false, 00:37:16.559 "ddgst": false 00:37:16.559 }, 00:37:16.559 "method": "bdev_nvme_attach_controller" 00:37:16.559 }' 00:37:16.559 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:37:16.559 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:16.559 "params": { 00:37:16.559 "name": "Nvme1", 00:37:16.559 "trtype": "tcp", 00:37:16.559 "traddr": "10.0.0.2", 00:37:16.559 "adrfam": "ipv4", 00:37:16.559 "trsvcid": "4420", 00:37:16.559 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:16.559 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:16.559 "hdgst": false, 00:37:16.559 "ddgst": false 00:37:16.559 }, 00:37:16.559 "method": "bdev_nvme_attach_controller" 00:37:16.559 }' 00:37:16.559 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:37:16.559 17:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:16.559 "params": { 00:37:16.559 "name": "Nvme1", 00:37:16.559 "trtype": "tcp", 00:37:16.559 "traddr": "10.0.0.2", 00:37:16.559 "adrfam": "ipv4", 00:37:16.559 "trsvcid": "4420", 00:37:16.559 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:16.559 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:16.559 "hdgst": false, 00:37:16.559 "ddgst": false 00:37:16.559 }, 00:37:16.559 "method": "bdev_nvme_attach_controller" 00:37:16.559 }' 00:37:16.559 [2024-10-08 17:53:08.426504] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:37:16.559 [2024-10-08 17:53:08.426574] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:37:16.559 [2024-10-08 17:53:08.429129] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:37:16.559 [2024-10-08 17:53:08.429191] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:37:16.559 [2024-10-08 17:53:08.432369] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
00:37:16.559 [2024-10-08 17:53:08.432378] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization...
00:37:16.559 [2024-10-08 17:53:08.432440] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:37:16.559 [2024-10-08 17:53:08.432440] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:37:16.819 [2024-10-08 17:53:08.641548] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:16.819 [2024-10-08 17:53:08.713850] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4
00:37:16.820 [2024-10-08 17:53:08.732289] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:16.820 [2024-10-08 17:53:08.799262] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:16.820 [2024-10-08 17:53:08.804061] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5
00:37:17.080 [2024-10-08 17:53:08.861622] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7
00:37:17.080 [2024-10-08 17:53:08.868552] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:17.080 [2024-10-08 17:53:08.931770] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6
00:37:17.080 Running I/O for 1 seconds...
00:37:17.340 Running I/O for 1 seconds...
00:37:17.340 Running I/O for 1 seconds...
00:37:17.604 Running I/O for 1 seconds...
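The four bdevperf jobs now running one-second workloads are the processes launched above as WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID; --json /dev/fd/63 is evidently the process substitution <(gen_nvmf_target_json), whose resolved bdev_nvme_attach_controller JSON is printed earlier in the trace. Condensed, with the bdevperf path shortened (BP is shorthand introduced here, not a variable from the script):

    BP=build/examples/bdevperf
    $BP -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    $BP -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
    $BP -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    $BP -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!

Distinct core masks (0x10 through 0x80) and shared-memory ids (-i 1 through 4) let the four secondary processes run concurrently against the one target.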
00:37:18.176 8451.00 IOPS, 33.01 MiB/s
00:37:18.176 Latency(us)
00:37:18.176 [2024-10-08T15:53:10.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:18.176 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:37:18.176 Nvme1n1 : 1.02 8436.45 32.95 0.00 0.00 15065.89 2402.99 24903.68
00:37:18.176 [2024-10-08T15:53:10.168Z] ===================================================================================================================
00:37:18.176 [2024-10-08T15:53:10.168Z] Total : 8436.45 32.95 0.00 0.00 15065.89 2402.99 24903.68
00:37:18.438 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 616342
00:37:18.438 7440.00 IOPS, 29.06 MiB/s
[2024-10-08T15:53:10.430Z] 13013.00 IOPS, 50.83 MiB/s
00:37:18.438 Latency(us)
[2024-10-08T15:53:10.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:18.438 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:37:18.438 Nvme1n1 : 1.01 7525.86 29.40 0.00 0.00 16951.68 5133.65 35826.35
00:37:18.438 [2024-10-08T15:53:10.430Z] ===================================================================================================================
00:37:18.438 [2024-10-08T15:53:10.430Z] Total : 7525.86 29.40 0.00 0.00 16951.68 5133.65 35826.35
00:37:18.438
00:37:18.438 Latency(us)
[2024-10-08T15:53:10.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:18.438 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:37:18.438 Nvme1n1 : 1.01 13085.98 51.12 0.00 0.00 9753.57 4096.00 15947.09
00:37:18.438 [2024-10-08T15:53:10.430Z] ===================================================================================================================
00:37:18.438 [2024-10-08T15:53:10.430Z] Total : 13085.98 51.12 0.00 0.00 9753.57 4096.00 15947.09
00:37:18.438 188160.00 IOPS, 735.00 MiB/s
00:37:18.438 Latency(us)
[2024-10-08T15:53:10.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:18.438 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:37:18.438 Nvme1n1 : 1.00 187787.92 733.55 0.00 0.00 677.92 308.91 1979.73
00:37:18.438 [2024-10-08T15:53:10.430Z] ===================================================================================================================
00:37:18.438 [2024-10-08T15:53:10.430Z] Total : 187787.92 733.55 0.00 0.00 677.92 308.91 1979.73
00:37:18.438 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 616344
00:37:18.700 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 616347
00:37:18.700 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:37:18.700 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:18.700 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:37:18.700 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:18.700 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:37:18.700 17:53:10
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:37:18.700 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:18.700 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:37:18.700 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:18.700 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:37:18.700 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:18.700 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:18.700 rmmod nvme_tcp 00:37:18.700 rmmod nvme_fabrics 00:37:18.700 rmmod nvme_keyring 00:37:18.700 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:18.700 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:37:18.700 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:37:18.700 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 615992 ']' 00:37:18.700 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 615992 00:37:18.700 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 615992 ']' 00:37:18.700 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 615992 00:37:18.700 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:37:18.700 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:18.700 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 615992 00:37:18.962 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:18.962 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:18.962 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 615992' 00:37:18.962 killing process with pid 615992 00:37:18.962 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 615992 00:37:18.962 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 615992 00:37:18.962 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:18.962 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:18.962 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:18.962 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:37:18.962 17:53:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:37:18.962 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:18.962 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:37:18.962 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:18.962 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:18.962 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:18.962 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:18.962 17:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:21.510 17:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:21.510 00:37:21.510 real 0m13.562s 00:37:21.510 user 0m17.346s 00:37:21.510 sys 0m7.985s 00:37:21.510 17:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:21.510 17:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:21.510 ************************************ 00:37:21.510 END TEST nvmf_bdev_io_wait 00:37:21.510 ************************************ 00:37:21.510 17:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:37:21.510 17:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:21.510 17:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:21.510 17:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:21.510 ************************************ 00:37:21.510 START TEST nvmf_queue_depth 00:37:21.510 ************************************ 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:37:21.510 * Looking for test storage... 
00:37:21.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:21.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.510 --rc genhtml_branch_coverage=1 00:37:21.510 --rc genhtml_function_coverage=1 00:37:21.510 --rc genhtml_legend=1 00:37:21.510 --rc geninfo_all_blocks=1 00:37:21.510 --rc geninfo_unexecuted_blocks=1 00:37:21.510 00:37:21.510 ' 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:21.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.510 --rc genhtml_branch_coverage=1 00:37:21.510 --rc genhtml_function_coverage=1 00:37:21.510 --rc genhtml_legend=1 00:37:21.510 --rc geninfo_all_blocks=1 00:37:21.510 --rc geninfo_unexecuted_blocks=1 00:37:21.510 00:37:21.510 ' 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:21.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.510 --rc genhtml_branch_coverage=1 00:37:21.510 --rc genhtml_function_coverage=1 00:37:21.510 --rc genhtml_legend=1 00:37:21.510 --rc geninfo_all_blocks=1 00:37:21.510 --rc geninfo_unexecuted_blocks=1 00:37:21.510 00:37:21.510 ' 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:21.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.510 --rc genhtml_branch_coverage=1 00:37:21.510 --rc genhtml_function_coverage=1 00:37:21.510 --rc genhtml_legend=1 00:37:21.510 --rc geninfo_all_blocks=1 00:37:21.510 --rc 
geninfo_unexecuted_blocks=1 00:37:21.510 00:37:21.510 ' 00:37:21.510 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:37:21.511 17:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:29.651 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:29.651 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:37:29.651 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:29.651 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:29.651 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:29.651 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:37:29.651 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:29.651 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:37:29.651 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:29.652 17:53:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:29.652 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:29.652 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:37:29.652 Found net devices under 0000:31:00.0: cvl_0_0 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:29.652 Found net devices under 0000:31:00.1: cvl_0_1 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:29.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:29.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.721 ms 00:37:29.652 00:37:29.652 --- 10.0.0.2 ping statistics --- 00:37:29.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:29.652 rtt min/avg/max/mdev = 0.721/0.721/0.721/0.000 ms 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:29.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:29.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:37:29.652 00:37:29.652 --- 10.0.0.1 ping statistics --- 00:37:29.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:29.652 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:29.652 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:29.653 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:29.653 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:29.653 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:29.653 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:37:29.653 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:29.653 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:29.653 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:29.653 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=620973 00:37:29.653 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 620973 00:37:29.653 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:37:29.653 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 620973 ']' 00:37:29.653 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:29.653 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:29.653 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:29.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
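Stripped of the xtrace noise, the nvmf_tcp_init plumbing traced a few entries back moves the target-side port into its own network namespace and leaves the initiator port in the root namespace; interface names and addresses below are verbatim from this log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                             # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> initiator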
00:37:29.653 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:29.653 17:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:29.653 [2024-10-08 17:53:20.995854] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:29.653 [2024-10-08 17:53:20.996972] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:37:29.653 [2024-10-08 17:53:20.997032] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:29.653 [2024-10-08 17:53:21.090345] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:29.653 [2024-10-08 17:53:21.184019] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:29.653 [2024-10-08 17:53:21.184077] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:29.653 [2024-10-08 17:53:21.184086] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:29.653 [2024-10-08 17:53:21.184094] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:29.653 [2024-10-08 17:53:21.184100] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:29.653 [2024-10-08 17:53:21.184930] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:29.653 [2024-10-08 17:53:21.260971] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:29.653 [2024-10-08 17:53:21.261274] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
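nvmfappstart launches the target inside that namespace; every flag below is echoed in the trace (-e 0xFFFF matches the "Tracepoint Group Mask 0xFFFF" notice, -m 0x2 puts the single reactor on core 1, and --interrupt-mode is what this whole test group exercises). The harness's waitforlisten helper retries against the RPC socket until the app answers; the loop below is a simplified stand-in for it, not the real implementation:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # crude wait-for-listen: block until the UNIX-domain RPC socket exists
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done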
00:37:29.913 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:29.913 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:37:29.913 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:29.913 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:29.913 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:29.913 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:29.913 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:29.913 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.913 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:29.913 [2024-10-08 17:53:21.877794] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:29.913 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.913 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:29.913 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.913 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:30.174 Malloc0 00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:30.174 [2024-10-08 17:53:21.957963] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=621134 00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 621134 /var/tmp/bdevperf.sock 00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 621134 ']' 00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:30.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:30.174 17:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:30.174 [2024-10-08 17:53:22.016158] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
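The target is provisioned entirely over JSON-RPC, exactly the five rpc_cmd calls traced above: create the TCP transport, back it with a 64 MiB malloc bdev of 512-byte blocks, expose that bdev as a namespace of cnode1, and listen on 10.0.0.2:4420 inside the namespace. The same sequence with the underlying rpc.py, which defaults to the /var/tmp/spdk.sock the target just opened:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420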
00:37:30.174 [2024-10-08 17:53:22.016222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621134 ] 00:37:30.174 [2024-10-08 17:53:22.097089] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:30.435 [2024-10-08 17:53:22.193180] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:31.007 17:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:31.007 17:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:37:31.008 17:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:31.008 17:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.008 17:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:31.008 NVMe0n1 00:37:31.008 17:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.008 17:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:31.008 Running I/O for 10 seconds... 00:37:33.335 8346.00 IOPS, 32.60 MiB/s [2024-10-08T15:53:26.268Z] 8704.00 IOPS, 34.00 MiB/s [2024-10-08T15:53:27.210Z] 9095.67 IOPS, 35.53 MiB/s [2024-10-08T15:53:28.150Z] 10048.50 IOPS, 39.25 MiB/s [2024-10-08T15:53:29.092Z] 10734.00 IOPS, 41.93 MiB/s [2024-10-08T15:53:30.035Z] 11238.50 IOPS, 43.90 MiB/s [2024-10-08T15:53:31.420Z] 11554.43 IOPS, 45.13 MiB/s [2024-10-08T15:53:32.360Z] 11773.88 IOPS, 45.99 MiB/s [2024-10-08T15:53:33.302Z] 12005.22 IOPS, 46.90 MiB/s [2024-10-08T15:53:33.302Z] 12182.00 IOPS, 47.59 MiB/s 00:37:41.310 Latency(us) 00:37:41.310 [2024-10-08T15:53:33.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:41.310 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:37:41.310 Verification LBA range: start 0x0 length 0x4000 00:37:41.310 NVMe0n1 : 10.06 12209.32 47.69 0.00 0.00 83590.73 24466.77 76021.76 00:37:41.310 [2024-10-08T15:53:33.302Z] =================================================================================================================== 00:37:41.310 [2024-10-08T15:53:33.302Z] Total : 12209.32 47.69 0.00 0.00 83590.73 24466.77 76021.76 00:37:41.310 { 00:37:41.310 "results": [ 00:37:41.310 { 00:37:41.310 "job": "NVMe0n1", 00:37:41.310 "core_mask": "0x1", 00:37:41.310 "workload": "verify", 00:37:41.310 "status": "finished", 00:37:41.310 "verify_range": { 00:37:41.310 "start": 0, 00:37:41.310 "length": 16384 00:37:41.310 }, 00:37:41.310 "queue_depth": 1024, 00:37:41.310 "io_size": 4096, 00:37:41.310 "runtime": 10.060924, 00:37:41.310 "iops": 12209.315963424433, 00:37:41.310 "mibps": 47.69264048212669, 00:37:41.310 "io_failed": 0, 00:37:41.310 "io_timeout": 0, 00:37:41.310 "avg_latency_us": 83590.72998005486, 00:37:41.310 "min_latency_us": 24466.773333333334, 00:37:41.310 "max_latency_us": 76021.76 00:37:41.310 } 00:37:41.310 ], 
00:37:41.310 "core_count": 1 00:37:41.310 } 00:37:41.310 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 621134 00:37:41.310 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 621134 ']' 00:37:41.310 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 621134 00:37:41.310 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:37:41.310 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:41.310 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 621134 00:37:41.310 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:41.310 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:41.310 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 621134' 00:37:41.310 killing process with pid 621134 00:37:41.310 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 621134 00:37:41.310 Received shutdown signal, test time was about 10.000000 seconds 00:37:41.310 00:37:41.310 Latency(us) 00:37:41.310 [2024-10-08T15:53:33.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:41.310 [2024-10-08T15:53:33.302Z] =================================================================================================================== 00:37:41.310 [2024-10-08T15:53:33.302Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:41.310 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 621134 00:37:41.310 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:37:41.310 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:37:41.310 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:41.310 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:37:41.310 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:41.310 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:37:41.310 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:41.310 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:41.571 rmmod nvme_tcp 00:37:41.571 rmmod nvme_fabrics 00:37:41.571 rmmod nvme_keyring 00:37:41.571 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:41.572 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:37:41.572 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:37:41.572 17:53:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 620973 ']' 00:37:41.572 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 620973 00:37:41.572 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 620973 ']' 00:37:41.572 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 620973 00:37:41.572 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:37:41.572 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:41.572 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 620973 00:37:41.572 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:41.572 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:41.572 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 620973' 00:37:41.572 killing process with pid 620973 00:37:41.572 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 620973 00:37:41.572 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 620973 00:37:41.572 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:41.572 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:41.572 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:41.572 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:37:41.832 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:37:41.832 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:41.832 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:37:41.832 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:41.832 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:41.832 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:41.832 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:41.833 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:43.744 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:43.744 00:37:43.744 real 0m22.611s 00:37:43.744 user 0m24.584s 00:37:43.744 sys 0m7.618s 00:37:43.744 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:37:43.744 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:43.744 ************************************ 00:37:43.744 END TEST nvmf_queue_depth 00:37:43.744 ************************************ 00:37:43.745 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:37:43.745 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:43.745 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:43.745 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:43.745 ************************************ 00:37:43.745 START TEST nvmf_target_multipath 00:37:43.745 ************************************ 00:37:43.745 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:37:44.007 * Looking for test storage... 00:37:44.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:37:44.007 17:53:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:44.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:44.007 --rc genhtml_branch_coverage=1 00:37:44.007 --rc genhtml_function_coverage=1 00:37:44.007 --rc genhtml_legend=1 00:37:44.007 --rc geninfo_all_blocks=1 00:37:44.007 --rc geninfo_unexecuted_blocks=1 00:37:44.007 00:37:44.007 ' 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:44.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:44.007 --rc genhtml_branch_coverage=1 00:37:44.007 --rc genhtml_function_coverage=1 00:37:44.007 --rc genhtml_legend=1 00:37:44.007 --rc geninfo_all_blocks=1 00:37:44.007 --rc geninfo_unexecuted_blocks=1 00:37:44.007 00:37:44.007 ' 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:44.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:44.007 --rc genhtml_branch_coverage=1 00:37:44.007 --rc genhtml_function_coverage=1 00:37:44.007 --rc genhtml_legend=1 00:37:44.007 --rc geninfo_all_blocks=1 00:37:44.007 --rc 
geninfo_unexecuted_blocks=1 00:37:44.007 00:37:44.007 ' 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:44.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:44.007 --rc genhtml_branch_coverage=1 00:37:44.007 --rc genhtml_function_coverage=1 00:37:44.007 --rc genhtml_legend=1 00:37:44.007 --rc geninfo_all_blocks=1 00:37:44.007 --rc geninfo_unexecuted_blocks=1 00:37:44.007 00:37:44.007 ' 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
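The initiator identity set up here comes from nvme gen-hostnqn: common.sh keeps the uuid-style NQN in NVME_HOSTNQN, reuses the bare uuid as NVME_HOSTID, and NVME_HOST carries both as ready-made nvme-cli arguments. A hypothetical stand-alone connect built from those variables (this particular run never issues one, since the multipath test bails out further down):

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}        # the bare uuid after the last colon
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"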
00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:44.007 17:53:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:44.007 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:37:44.008 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
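nvmftestinit now has to find physical NICs to run on; this rig is set up for e810 parts, so the device IDs of interest are Intel 0x1592/0x159b (the e810 array populated below), and the enumeration that follows finds both ports of one E810 (0000:31:00.0/.1) together with their renamed net devices. In outline, the sysfs walk it performs amounts to:

    for pci in /sys/bus/pci/devices/*; do
        [ "$(cat "$pci/vendor")" = 0x8086 ] || continue              # Intel only
        case "$(cat "$pci/device")" in 0x1592|0x159b) ;; *) continue ;; esac
        for net in "$pci"/net/*; do                                  # ports bound to a kernel driver
            [ -e "$net" ] || continue
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done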
00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:52.262 17:53:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:52.262 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:52.262 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:52.263 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:52.263 17:53:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:52.263 Found net devices under 0000:31:00.0: cvl_0_0 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:52.263 Found net devices under 0000:31:00.1: cvl_0_1 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:52.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:52.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:37:52.263 00:37:52.263 --- 10.0.0.2 ping statistics --- 00:37:52.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:52.263 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:52.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:52.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:37:52.263 00:37:52.263 --- 10.0.0.1 ping statistics --- 00:37:52.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:52.263 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:37:52.263 only one NIC for nvmf test 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:52.263 rmmod nvme_tcp 00:37:52.263 rmmod nvme_fabrics 00:37:52.263 rmmod nvme_keyring 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:37:52.263 17:53:43 
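The multipath test then exits almost immediately: nvmf_tcp_init above left the second target IP empty (NVMF_SECOND_TARGET_IP= in the trace), so with a single NIC pair there is no alternate path to exercise and multipath.sh@45-48 skips with success. Reconstructed from the trace, with the tested variable name inferred from the init block:

    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then    # traced above as: '[' -z ']'
        echo 'only one NIC for nvmf test'
        nvmftestfini                            # tear down modules, iptables rule, namespace
        exit 0
    fi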
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:52.263 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:52.264 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:37:54.288 17:53:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:54.288 00:37:54.288 real 0m10.105s 00:37:54.288 user 0m2.205s 00:37:54.288 sys 0m5.840s 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:37:54.288 ************************************ 00:37:54.288 END TEST nvmf_target_multipath 00:37:54.288 ************************************ 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:54.288 ************************************ 00:37:54.288 START TEST nvmf_zcopy 00:37:54.288 ************************************ 00:37:54.288 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:37:54.288 * Looking for test storage... 
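The multipath test above bailed out at target/multipath.sh line 46 ("only one NIC for nvmf test") and tore its environment down twice — once via nvmftestfini at line 47 and once more from the EXIT trap at line 1. A minimal bash sketch of that teardown path, with the commands taken from the trace; the body of _remove_spdk_ns (the netns deletion) is an assumption here:

# nvmftestfini, roughly as the trace above executed it; the _remove_spdk_ns
# body is assumed, everything else is straight from the xtrace output.
nvmftestfini_sketch() {
    sync
    set +e
    for i in {1..20}; do
        # Unload host-side NVMe modules; the rmmod output shows up in the log.
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e
    # Drop only the SPDK-tagged iptables rules, keep everything else intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Assumed body of remove_spdk_ns: delete the target namespace, then flush
    # the initiator-side port.
    ip netns delete cvl_0_0_ns_spdk 2> /dev/null || true
    ip -4 addr flush cvl_0_1
}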
00:37:54.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:54.288 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:54.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:54.289 --rc genhtml_branch_coverage=1 00:37:54.289 --rc genhtml_function_coverage=1 00:37:54.289 --rc genhtml_legend=1 00:37:54.289 --rc geninfo_all_blocks=1 00:37:54.289 --rc geninfo_unexecuted_blocks=1 00:37:54.289 00:37:54.289 ' 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:54.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:54.289 --rc genhtml_branch_coverage=1 00:37:54.289 --rc genhtml_function_coverage=1 00:37:54.289 --rc genhtml_legend=1 00:37:54.289 --rc geninfo_all_blocks=1 00:37:54.289 --rc geninfo_unexecuted_blocks=1 00:37:54.289 00:37:54.289 ' 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:54.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:54.289 --rc genhtml_branch_coverage=1 00:37:54.289 --rc genhtml_function_coverage=1 00:37:54.289 --rc genhtml_legend=1 00:37:54.289 --rc geninfo_all_blocks=1 00:37:54.289 --rc geninfo_unexecuted_blocks=1 00:37:54.289 00:37:54.289 ' 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:54.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:54.289 --rc genhtml_branch_coverage=1 00:37:54.289 --rc genhtml_function_coverage=1 00:37:54.289 --rc genhtml_legend=1 00:37:54.289 --rc geninfo_all_blocks=1 00:37:54.289 --rc geninfo_unexecuted_blocks=1 00:37:54.289 00:37:54.289 ' 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:54.289 17:53:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:37:54.289 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:38:02.510 17:53:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:02.510 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:02.510 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:02.510 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:02.511 Found net devices under 0000:31:00.0: cvl_0_0 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:02.511 Found net devices under 0000:31:00.1: cvl_0_1 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:02.511 17:53:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:02.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:02.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:38:02.511 00:38:02.511 --- 10.0.0.2 ping statistics --- 00:38:02.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:02.511 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:02.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:02.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:38:02.511 00:38:02.511 --- 10.0.0.1 ping statistics --- 00:38:02.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:02.511 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=631782 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 631782 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 631782 ']' 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:02.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:02.511 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:02.511 [2024-10-08 17:53:53.572136] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:02.511 [2024-10-08 17:53:53.573163] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:38:02.511 [2024-10-08 17:53:53.573203] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:02.511 [2024-10-08 17:53:53.660256] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:02.511 [2024-10-08 17:53:53.752891] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:02.511 [2024-10-08 17:53:53.752955] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:02.511 [2024-10-08 17:53:53.752965] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:02.511 [2024-10-08 17:53:53.752972] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:02.511 [2024-10-08 17:53:53.752989] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:02.511 [2024-10-08 17:53:53.753771] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:38:02.511 [2024-10-08 17:53:53.828928] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:02.511 [2024-10-08 17:53:53.829221] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
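Up to this point nvmftestinit has turned the two E810 ports into a self-contained NVMe-oF/TCP test topology: cvl_0_0 is moved into the namespace cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and nvmf_tgt is started inside the namespace on a single core in interrupt mode. A condensed replay of those steps, all taken from the trace above (only the workspace path is shortened):

# Target NIC goes into its own netns so initiator I/O really crosses the wire.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port with a tagged rule so teardown can find it again.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Verify reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# Launch the target inside the namespace: one core (-m 0x2), interrupt mode.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &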
00:38:02.511 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:02.511 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:38:02.511 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:02.511 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:02.511 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:02.511 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:02.511 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:38:02.511 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:38:02.511 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.511 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:02.511 [2024-10-08 17:53:54.418621] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:02.511 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.511 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:02.511 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.511 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:02.511 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.511 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:02.511 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.511 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:02.511 [2024-10-08 17:53:54.446878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:02.511 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.511 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:02.512 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.512 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:02.512 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.512 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:38:02.512 17:53:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.512 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:02.512 malloc0 00:38:02.512 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.512 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:38:02.512 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.512 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:02.772 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.772 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:38:02.772 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:38:02.772 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:38:02.772 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:38:02.772 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:02.772 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:02.772 { 00:38:02.772 "params": { 00:38:02.772 "name": "Nvme$subsystem", 00:38:02.772 "trtype": "$TEST_TRANSPORT", 00:38:02.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:02.772 "adrfam": "ipv4", 00:38:02.772 "trsvcid": "$NVMF_PORT", 00:38:02.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:02.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:02.772 "hdgst": ${hdgst:-false}, 00:38:02.772 "ddgst": ${ddgst:-false} 00:38:02.772 }, 00:38:02.772 "method": "bdev_nvme_attach_controller" 00:38:02.772 } 00:38:02.772 EOF 00:38:02.772 )") 00:38:02.772 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:38:02.772 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:38:02.772 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:38:02.772 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:02.772 "params": { 00:38:02.772 "name": "Nvme1", 00:38:02.772 "trtype": "tcp", 00:38:02.772 "traddr": "10.0.0.2", 00:38:02.773 "adrfam": "ipv4", 00:38:02.773 "trsvcid": "4420", 00:38:02.773 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:02.773 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:02.773 "hdgst": false, 00:38:02.773 "ddgst": false 00:38:02.773 }, 00:38:02.773 "method": "bdev_nvme_attach_controller" 00:38:02.773 }' 00:38:02.773 [2024-10-08 17:53:54.561506] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:38:02.773 [2024-10-08 17:53:54.561569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631947 ]
00:38:02.773 [2024-10-08 17:53:54.642615] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:02.773 [2024-10-08 17:53:54.726169] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:38:03.034 Running I/O for 10 seconds...
00:38:04.920 6575.00 IOPS, 51.37 MiB/s
[2024-10-08T15:53:58.300Z] 6571.50 IOPS, 51.34 MiB/s
[2024-10-08T15:53:59.244Z] 6536.00 IOPS, 51.06 MiB/s
[2024-10-08T15:54:00.188Z] 6529.75 IOPS, 51.01 MiB/s
[2024-10-08T15:54:01.132Z] 6945.00 IOPS, 54.26 MiB/s
[2024-10-08T15:54:02.074Z] 7391.83 IOPS, 57.75 MiB/s
[2024-10-08T15:54:03.017Z] 7707.00 IOPS, 60.21 MiB/s
[2024-10-08T15:54:03.958Z] 7943.75 IOPS, 62.06 MiB/s
[2024-10-08T15:54:04.900Z] 8133.78 IOPS, 63.55 MiB/s
[2024-10-08T15:54:04.900Z] 8283.80 IOPS, 64.72 MiB/s
00:38:12.908 Latency(us)
00:38:12.908 [2024-10-08T15:54:04.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:12.908 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:38:12.908 Verification LBA range: start 0x0 length 0x1000
00:38:12.908 Nvme1n1 : 10.01 8286.99 64.74 0.00 0.00 15399.65 1583.79 26651.31
00:38:12.908 [2024-10-08T15:54:04.900Z] ===================================================================================================================
00:38:12.908 [2024-10-08T15:54:04.900Z] Total : 8286.99 64.74 0.00 0.00 15399.65 1583.79 26651.31
00:38:13.169 17:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=634051
00:38:13.169 17:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:38:13.169 17:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:38:13.169 17:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:38:13.169 17:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:38:13.169 17:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=()
00:38:13.169 17:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config
00:38:13.169 17:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:38:13.169 17:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:38:13.169 {
00:38:13.169 "params": {
00:38:13.169 "name": "Nvme$subsystem",
00:38:13.169 "trtype": "$TEST_TRANSPORT",
00:38:13.169 "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:13.169 "adrfam": "ipv4",
00:38:13.169 "trsvcid": "$NVMF_PORT",
00:38:13.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:13.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:13.169 "hdgst": ${hdgst:-false},
00:38:13.169 "ddgst": ${ddgst:-false}
00:38:13.169 },
00:38:13.169 "method": "bdev_nvme_attach_controller"
00:38:13.169 }
00:38:13.169 EOF
00:38:13.169 )")
00:38:13.169 17:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat
00:38:13.169
[2024-10-08 17:54:05.026195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.169 [2024-10-08 17:54:05.026223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.169 17:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:38:13.169 17:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:38:13.169 17:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:13.169 "params": { 00:38:13.169 "name": "Nvme1", 00:38:13.169 "trtype": "tcp", 00:38:13.169 "traddr": "10.0.0.2", 00:38:13.169 "adrfam": "ipv4", 00:38:13.169 "trsvcid": "4420", 00:38:13.169 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:13.169 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:13.169 "hdgst": false, 00:38:13.169 "ddgst": false 00:38:13.169 }, 00:38:13.169 "method": "bdev_nvme_attach_controller" 00:38:13.169 }' 00:38:13.169 [2024-10-08 17:54:05.038162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.169 [2024-10-08 17:54:05.038172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.169 [2024-10-08 17:54:05.050160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.170 [2024-10-08 17:54:05.050167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.170 [2024-10-08 17:54:05.062161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.170 [2024-10-08 17:54:05.062168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.170 [2024-10-08 17:54:05.067510] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
00:38:13.170 [2024-10-08 17:54:05.067557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid634051 ] 00:38:13.170 [2024-10-08 17:54:05.074160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.170 [2024-10-08 17:54:05.074167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.170 [2024-10-08 17:54:05.086159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.170 [2024-10-08 17:54:05.086167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.170 [2024-10-08 17:54:05.098160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.170 [2024-10-08 17:54:05.098167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.170 [2024-10-08 17:54:05.110160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.170 [2024-10-08 17:54:05.110167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.170 [2024-10-08 17:54:05.122159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.170 [2024-10-08 17:54:05.122166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.170 [2024-10-08 17:54:05.134160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.170 [2024-10-08 17:54:05.134167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.170 [2024-10-08 17:54:05.142461] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:13.170 [2024-10-08 17:54:05.146160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.170 [2024-10-08 17:54:05.146170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.170 [2024-10-08 17:54:05.158161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.170 [2024-10-08 17:54:05.158171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.430 [2024-10-08 17:54:05.170161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.430 [2024-10-08 17:54:05.170172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.430 [2024-10-08 17:54:05.182160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.430 [2024-10-08 17:54:05.182171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.431 [2024-10-08 17:54:05.194160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.431 [2024-10-08 17:54:05.194169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.431 [2024-10-08 17:54:05.195694] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:38:13.431 [2024-10-08 17:54:05.206161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.431 [2024-10-08 17:54:05.206175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.431 [2024-10-08 17:54:05.218167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:38:13.431 [2024-10-08 17:54:05.218180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.431 [2024-10-08 17:54:05.230162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.431 [2024-10-08 17:54:05.230171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.431 [2024-10-08 17:54:05.242163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.431 [2024-10-08 17:54:05.242173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.431 [2024-10-08 17:54:05.254161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.431 [2024-10-08 17:54:05.254168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.431 [2024-10-08 17:54:05.266168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.431 [2024-10-08 17:54:05.266183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.431 [2024-10-08 17:54:05.278163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.431 [2024-10-08 17:54:05.278172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.431 [2024-10-08 17:54:05.290163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.431 [2024-10-08 17:54:05.290172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.431 [2024-10-08 17:54:05.302163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.431 [2024-10-08 17:54:05.302174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.431 [2024-10-08 17:54:05.314162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.431 [2024-10-08 17:54:05.314172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.431 [2024-10-08 17:54:05.358426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:13.431 [2024-10-08 17:54:05.358440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:13.431 Running I/O for 5 seconds... 
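The repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs above appear to come from re-issuing nvmf_subsystem_add_ns against the live subsystem while the 5-second randrw bdevperf run is in flight; the subsystem itself was provisioned through rpc_cmd right after the target came up. A sketch of those RPCs, assuming rpc_cmd resolves to scripts/rpc.py talking to the default /var/tmp/spdk.sock socket:

# rpc_cmd is assumed here to be a thin wrapper over scripts/rpc.py.
rpc() { ./scripts/rpc.py "$@"; }
# TCP transport with zero-copy enabled; -c 0 is assumed to disable
# in-capsule data so reads/writes exercise the zero-copy path.
rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
# Subsystem cnode1: allow any host (-a), serial number, up to 10 namespaces.
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# 32 MiB malloc bdev with 4096-byte blocks, exported as NSID 1.
rpc bdev_malloc_create 32 4096 -b malloc0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Re-adding the same NSID while bdevperf I/O is running fails exactly as the
# records above show: "Requested NSID 1 already in use".
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true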
[... add_ns_ext/ns_paused error pair repeats, roughly one attempt every 13 ms, 17:54:05.370164 through 17:54:06.365378 ...]
18989.00 IOPS, 148.35 MiB/s [2024-10-08T15:54:06.467Z]
[... error pair continues, 17:54:06.378342 through 17:54:06.539065 ...]
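The per-second samples interleaved with the errors are the I/O half of the run; dividing throughput by IOPS recovers the average I/O size. A quick back-of-the-envelope check, not part of the log output:

  # 148.35 MiB/s at 18989.00 IOPS works out to ~8192 bytes, i.e. 8 KiB per I/O
  awk 'BEGIN { printf "%.0f bytes per I/O\n", 148.35 * 1024 * 1024 / 18989.00 }'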
[... error pair continues, 17:54:06.553028 through 17:54:07.369716 ...]
19017.00 IOPS, 148.57 MiB/s [2024-10-08T15:54:07.511Z]
[... error pair continues, 17:54:07.382646 through 17:54:07.535029 ...]
[... error pair continues, 17:54:07.535043 through 17:54:08.373161 ...]
19016.67 IOPS, 148.57 MiB/s [2024-10-08T15:54:08.556Z]
[... error pair continues, 17:54:08.386300 through 17:54:08.522206 ...]
[... error pair continues, 17:54:08.535584 through 17:54:09.310387 ...]
00:38:17.349 [2024-10-08 17:54:09.323026]
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.349 [2024-10-08 17:54:09.323041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.349 [2024-10-08 17:54:09.337493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.349 [2024-10-08 17:54:09.337508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.611 [2024-10-08 17:54:09.350633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.611 [2024-10-08 17:54:09.350648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.611 [2024-10-08 17:54:09.365837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.611 [2024-10-08 17:54:09.365853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.611 [2024-10-08 17:54:09.379292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.611 [2024-10-08 17:54:09.379307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.611 19012.75 IOPS, 148.54 MiB/s [2024-10-08T15:54:09.603Z] [2024-10-08 17:54:09.393513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.611 [2024-10-08 17:54:09.393528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.611 [2024-10-08 17:54:09.406561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.611 [2024-10-08 17:54:09.406576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.611 [2024-10-08 17:54:09.421632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.611 [2024-10-08 17:54:09.421647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.611 [2024-10-08 17:54:09.434861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.612 [2024-10-08 17:54:09.434875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.612 [2024-10-08 17:54:09.449493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.612 [2024-10-08 17:54:09.449508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.612 [2024-10-08 17:54:09.462604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.612 [2024-10-08 17:54:09.462618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.612 [2024-10-08 17:54:09.477170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.612 [2024-10-08 17:54:09.477185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.612 [2024-10-08 17:54:09.490115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.612 [2024-10-08 17:54:09.490129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.612 [2024-10-08 17:54:09.503181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.612 [2024-10-08 17:54:09.503196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.612 [2024-10-08 17:54:09.517682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:38:17.612 [2024-10-08 17:54:09.517697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.612 [2024-10-08 17:54:09.530859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.612 [2024-10-08 17:54:09.530873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.612 [2024-10-08 17:54:09.545571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.612 [2024-10-08 17:54:09.545585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.612 [2024-10-08 17:54:09.558515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.612 [2024-10-08 17:54:09.558530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.612 [2024-10-08 17:54:09.573006] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.612 [2024-10-08 17:54:09.573020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.612 [2024-10-08 17:54:09.585894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.612 [2024-10-08 17:54:09.585910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.612 [2024-10-08 17:54:09.599372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.612 [2024-10-08 17:54:09.599386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.873 [2024-10-08 17:54:09.613465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.873 [2024-10-08 17:54:09.613480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.873 [2024-10-08 17:54:09.626744] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.873 [2024-10-08 17:54:09.626759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.873 [2024-10-08 17:54:09.641205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.873 [2024-10-08 17:54:09.641219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.873 [2024-10-08 17:54:09.654326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.873 [2024-10-08 17:54:09.654341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.873 [2024-10-08 17:54:09.667243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.873 [2024-10-08 17:54:09.667258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.873 [2024-10-08 17:54:09.681464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.873 [2024-10-08 17:54:09.681479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.873 [2024-10-08 17:54:09.694682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.873 [2024-10-08 17:54:09.694696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.873 [2024-10-08 17:54:09.709398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.873 [2024-10-08 17:54:09.709413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.873 [2024-10-08 17:54:09.722378] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.873 [2024-10-08 17:54:09.722393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.873 [2024-10-08 17:54:09.735448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.873 [2024-10-08 17:54:09.735463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.873 [2024-10-08 17:54:09.749489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.873 [2024-10-08 17:54:09.749504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.873 [2024-10-08 17:54:09.762655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.873 [2024-10-08 17:54:09.762670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.873 [2024-10-08 17:54:09.777373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.873 [2024-10-08 17:54:09.777388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.873 [2024-10-08 17:54:09.790617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.873 [2024-10-08 17:54:09.790631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.873 [2024-10-08 17:54:09.805281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.873 [2024-10-08 17:54:09.805295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.873 [2024-10-08 17:54:09.818509] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.873 [2024-10-08 17:54:09.818523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.873 [2024-10-08 17:54:09.833749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.873 [2024-10-08 17:54:09.833763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.873 [2024-10-08 17:54:09.846970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.873 [2024-10-08 17:54:09.846995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:17.873 [2024-10-08 17:54:09.861935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:17.873 [2024-10-08 17:54:09.861951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.135 [2024-10-08 17:54:09.875136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.135 [2024-10-08 17:54:09.875151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.135 [2024-10-08 17:54:09.889310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.135 [2024-10-08 17:54:09.889324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.135 [2024-10-08 17:54:09.902017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.135 [2024-10-08 17:54:09.902031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.135 [2024-10-08 17:54:09.915089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.135 [2024-10-08 17:54:09.915103] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.135 [2024-10-08 17:54:09.929411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.135 [2024-10-08 17:54:09.929425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.135 [2024-10-08 17:54:09.942347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.135 [2024-10-08 17:54:09.942361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.135 [2024-10-08 17:54:09.955203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.135 [2024-10-08 17:54:09.955218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.135 [2024-10-08 17:54:09.969304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.135 [2024-10-08 17:54:09.969319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.135 [2024-10-08 17:54:09.981983] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.135 [2024-10-08 17:54:09.981998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.135 [2024-10-08 17:54:09.994837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.135 [2024-10-08 17:54:09.994852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.135 [2024-10-08 17:54:10.009489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.135 [2024-10-08 17:54:10.009505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.135 [2024-10-08 17:54:10.022727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.135 [2024-10-08 17:54:10.022741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.135 [2024-10-08 17:54:10.037488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.135 [2024-10-08 17:54:10.037502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.135 [2024-10-08 17:54:10.050372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.135 [2024-10-08 17:54:10.050387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.135 [2024-10-08 17:54:10.063112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.135 [2024-10-08 17:54:10.063126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.135 [2024-10-08 17:54:10.077127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.135 [2024-10-08 17:54:10.077142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.135 [2024-10-08 17:54:10.090116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.135 [2024-10-08 17:54:10.090130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.135 [2024-10-08 17:54:10.103584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.135 [2024-10-08 17:54:10.103603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.135 [2024-10-08 17:54:10.117681] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.135 [2024-10-08 17:54:10.117695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.396 [2024-10-08 17:54:10.131020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.396 [2024-10-08 17:54:10.131035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.396 [2024-10-08 17:54:10.145019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.396 [2024-10-08 17:54:10.145033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.396 [2024-10-08 17:54:10.158154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.396 [2024-10-08 17:54:10.158169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.396 [2024-10-08 17:54:10.171186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.396 [2024-10-08 17:54:10.171201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.396 [2024-10-08 17:54:10.185343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.396 [2024-10-08 17:54:10.185358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.396 [2024-10-08 17:54:10.198498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.396 [2024-10-08 17:54:10.198512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.396 [2024-10-08 17:54:10.213497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.396 [2024-10-08 17:54:10.213512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.396 [2024-10-08 17:54:10.226770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.396 [2024-10-08 17:54:10.226785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.396 [2024-10-08 17:54:10.241196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.396 [2024-10-08 17:54:10.241211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.396 [2024-10-08 17:54:10.254188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.396 [2024-10-08 17:54:10.254203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.396 [2024-10-08 17:54:10.267082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.396 [2024-10-08 17:54:10.267096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.396 [2024-10-08 17:54:10.281262] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.396 [2024-10-08 17:54:10.281277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.396 [2024-10-08 17:54:10.294716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.396 [2024-10-08 17:54:10.294730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.396 [2024-10-08 17:54:10.309758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.397 [2024-10-08 17:54:10.309772] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.397 [2024-10-08 17:54:10.322937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.397 [2024-10-08 17:54:10.322951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.397 [2024-10-08 17:54:10.337676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.397 [2024-10-08 17:54:10.337690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.397 [2024-10-08 17:54:10.350699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.397 [2024-10-08 17:54:10.350714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.397 [2024-10-08 17:54:10.365434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.397 [2024-10-08 17:54:10.365455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.397 [2024-10-08 17:54:10.378572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.397 [2024-10-08 17:54:10.378586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.657 19014.00 IOPS, 148.55 MiB/s [2024-10-08T15:54:10.649Z] [2024-10-08 17:54:10.390167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.657 [2024-10-08 17:54:10.390181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.657 00:38:18.657 Latency(us) 00:38:18.657 [2024-10-08T15:54:10.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:18.657 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:38:18.657 Nvme1n1 : 5.01 19019.31 148.59 0.00 0.00 6723.88 2717.01 11359.57 00:38:18.657 [2024-10-08T15:54:10.649Z] =================================================================================================================== 00:38:18.657 [2024-10-08T15:54:10.649Z] Total : 19019.31 148.59 0.00 0.00 6723.88 2717.01 11359.57 00:38:18.657 [2024-10-08 17:54:10.402166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.658 [2024-10-08 17:54:10.402180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.658 [2024-10-08 17:54:10.414170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.658 [2024-10-08 17:54:10.414183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.658 [2024-10-08 17:54:10.426167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.658 [2024-10-08 17:54:10.426178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.658 [2024-10-08 17:54:10.438164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.658 [2024-10-08 17:54:10.438175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.658 [2024-10-08 17:54:10.450162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.658 [2024-10-08 17:54:10.450172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.658 [2024-10-08 17:54:10.462160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.658 [2024-10-08 
17:54:10.462169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.658 [2024-10-08 17:54:10.474165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.658 [2024-10-08 17:54:10.474176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.658 [2024-10-08 17:54:10.486163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.658 [2024-10-08 17:54:10.486173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.658 [2024-10-08 17:54:10.498161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.658 [2024-10-08 17:54:10.498170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.658 [2024-10-08 17:54:10.510160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.658 [2024-10-08 17:54:10.510169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (634051) - No such process 00:38:18.658 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 634051 00:38:18.658 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:18.658 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.658 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:18.658 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.658 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:18.658 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.658 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:18.658 delay0 00:38:18.658 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.658 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:38:18.658 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.658 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:18.658 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.658 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:38:18.919 [2024-10-08 17:54:10.664560] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:25.508 Initializing NVMe Controllers 00:38:25.508 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
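The trace above swaps namespace 1 over the RPC socket: remove NSID 1, create a delay bdev on top of malloc0, then re-attach it as NSID 1. A minimal sketch of the same sequence issued by hand with SPDK's scripts/rpc.py (rpc_cmd in the harness forwards to it; the NQN, bdev names, and latency values are this run's):

rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
rpc.py bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read, avg/p99 write latency (us)
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

Issuing nvmf_subsystem_add_ns with -n 1 while NSID 1 is still attached is what produced the long run of "Requested NSID 1 already in use" errors collapsed above.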
00:38:18.658 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:38:18.919 [2024-10-08 17:54:10.664560] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:38:25.508 Initializing NVMe Controllers
00:38:25.508 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:38:25.508 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:38:25.508 Initialization complete. Launching workers.
00:38:25.508 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3849
00:38:25.508 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 4132, failed to submit 37
00:38:25.508 success 4042, unsuccessful 90, failed 0
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:38:25.508 rmmod nvme_tcp
00:38:25.508 rmmod nvme_fabrics
00:38:25.508 rmmod nvme_keyring
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 631782 ']'
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 631782
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 631782 ']'
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 631782
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 631782
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 631782'
00:38:25.508 killing process with pid 631782
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 631782
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 631782
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']'
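nvmfcleanup above unloads the kernel initiator stack inside a bounded retry loop, since nvme-tcp can hold references briefly after the abort example exits. A condensed sketch of that loop, reconstructed from the trace (the sleep back-off is an assumption; the trace only shows the first pass succeeding):

set +e
for i in {1..20}; do
    # modprobe -r also drops the dependent nvme_fabrics/nvme_keyring modules,
    # which is why the log shows three rmmod lines for one command.
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1   # assumed back-off between attempts
done
set -e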
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:38:25.508 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:38:28.101
00:38:28.101 real 0m33.662s
00:38:28.101 user 0m42.998s
00:38:28.101 sys 0m12.054s
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:38:28.101 ************************************
00:38:28.101 END TEST nvmf_zcopy
00:38:28.101 ************************************
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:38:28.101 ************************************
00:38:28.101 START TEST nvmf_nmic
00:38:28.101 ************************************
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:38:28.101 * Looking for test storage...
00:38:28.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
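The zcopy teardown above also reverses the network setup: iptr rewrites the firewall without SPDK's rules, remove_spdk_ns drops the target's namespace, and the initiator port's address is flushed. A minimal sketch of that sequence (interface and namespace names are this run's; the netns delete is the assumed effect of _remove_spdk_ns):

iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only SPDK_NVMF rules
ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1                               # clear the initiator-side port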
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:38:28.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:28.101 --rc genhtml_branch_coverage=1
00:38:28.101 --rc genhtml_function_coverage=1
00:38:28.101 --rc genhtml_legend=1
00:38:28.101 --rc geninfo_all_blocks=1
00:38:28.101 --rc geninfo_unexecuted_blocks=1
00:38:28.101
00:38:28.101 '
[the same option block is then assigned to LCOV_OPTS and exported again as LCOV='lcov ...'; three repeated copies of the block collapsed]
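The trace above is scripts/common.sh concluding that lcov 1.15 is older than 2, which selects the --rc option spelling exported as LCOV_OPTS. A condensed sketch of that comparison, reconstructed from the trace rather than copied from the script:

# Split dotted versions on . - : and compare field by field; missing fields count as 0.
cmp_versions() {
    local IFS=.-: op=$2 v ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *'='* ]]   # equal versions satisfy ==, <=, >=
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo "lcov predates 2.0: enable the lcov_*_coverage rc options"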
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
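common.sh above mints a fresh host NQN and derives the host ID from its UUID; the NVME_HOST array is how later connects pass both. The connect itself has not happened yet at this point in the log; a hedged sketch of how these variables are typically consumed, using this run's target address and the NVME_SUBNQN from the trace:

NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # assumed derivation: the bare UUID suffix
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"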
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:38:28.101 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same golangci/protoc/go trio repeated five more times, collapsed]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
[paths/export.sh@3 and @4 re-export PATH with /opt/go/1.21.1/bin and /opt/protoc/21.7/bin prepended yet again; two near-identical expansions collapsed]
00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
[paths/export.sh@6 echoes the resulting PATH; duplicate expansion collapsed]
00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:38:28.102 17:54:19
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:38:28.102 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:36.249 17:54:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:36.249 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:36.249 17:54:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:36.249 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:36.249 Found net devices under 0000:31:00.0: cvl_0_0 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:36.249 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:36.249 
17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:36.250 Found net devices under 0000:31:00.1: cvl_0_1 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:36.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:36.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:38:36.250 00:38:36.250 --- 10.0.0.2 ping statistics --- 00:38:36.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:36.250 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:36.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:36.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:38:36.250 00:38:36.250 --- 10.0.0.1 ping statistics --- 00:38:36.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:36.250 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=640954 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
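[annotation] Condensed, the nvmf_tcp_init sequence traced above turns the two E810 ports into a two-node topology: cvl_0_0 moves into a fresh namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens the NVMe/TCP port, and a ping in each direction verifies reachability:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Tag the rule so nvmftestfini can strip it later by filtering on SPDK_NVMF
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1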
nvmf/common.sh@508 -- # waitforlisten 640954 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 640954 ']' 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:36.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:36.250 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:36.250 [2024-10-08 17:54:27.600080] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:36.250 [2024-10-08 17:54:27.601224] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:38:36.250 [2024-10-08 17:54:27.601274] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:36.250 [2024-10-08 17:54:27.691140] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:36.250 [2024-10-08 17:54:27.789515] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:36.250 [2024-10-08 17:54:27.789580] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:36.250 [2024-10-08 17:54:27.789592] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:36.250 [2024-10-08 17:54:27.789602] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:36.250 [2024-10-08 17:54:27.789617] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:36.250 [2024-10-08 17:54:27.792133] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:38:36.250 [2024-10-08 17:54:27.792294] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:38:36.250 [2024-10-08 17:54:27.792452] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:38:36.250 [2024-10-08 17:54:27.792456] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:38:36.250 [2024-10-08 17:54:27.889940] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:36.250 [2024-10-08 17:54:27.890706] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:36.250 [2024-10-08 17:54:27.891103] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
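[annotation] nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten blocks until the RPC socket answers before the test proceeds. A rough standalone equivalent — the poll loop here is an assumption for illustration; the real helper retries with a bounded max_retries rather than looping forever:
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!
# Hypothetical poll: wait for /var/tmp/spdk.sock to exist and answer an RPC
until [ -S /var/tmp/spdk.sock ] && ./scripts/rpc.py spdk_get_version &>/dev/null; do
    sleep 0.1
done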
00:38:36.250 [2024-10-08 17:54:27.891458] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:36.250 [2024-10-08 17:54:27.891528] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:36.511 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:36.511 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:38:36.511 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:36.511 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:36.511 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:36.511 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:36.511 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:36.511 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.511 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:36.511 [2024-10-08 17:54:28.461489] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:36.511 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.512 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:36.512 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.512 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:36.773 Malloc0 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:36.773 [2024-10-08 17:54:28.545770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:38:36.773 test case1: single bdev can't be used in multiple subsystems 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:36.773 [2024-10-08 17:54:28.581089] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:38:36.773 [2024-10-08 17:54:28.581119] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:38:36.773 [2024-10-08 17:54:28.581132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:36.773 request: 00:38:36.773 { 00:38:36.773 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:38:36.773 "namespace": { 00:38:36.773 "bdev_name": "Malloc0", 00:38:36.773 "no_auto_visible": false 00:38:36.773 }, 00:38:36.773 "method": "nvmf_subsystem_add_ns", 00:38:36.773 "req_id": 1 00:38:36.773 } 00:38:36.773 Got JSON-RPC error response 00:38:36.773 response: 00:38:36.773 { 00:38:36.773 "code": -32602, 00:38:36.773 "message": "Invalid parameters" 00:38:36.773 } 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:38:36.773 17:54:28 
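[annotation] Test case1 above exercises SPDK's bdev claim semantics: nvmf_subsystem_add_ns takes an exclusive_write claim on the bdev, so attaching Malloc0 to a second subsystem is rejected with the -32602 JSON-RPC response shown. The rpc_cmd calls in the trace map one-to-one onto scripts/rpc.py, so the sequence can be reproduced as:
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # claims Malloc0 (exclusive_write)
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail: bdev already claimed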
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:38:36.773 Adding namespace failed - expected result. 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:38:36.773 test case2: host connect to nvmf target in multiple paths 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:36.773 [2024-10-08 17:54:28.593260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:38:36.773 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.774 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:37.347 17:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:38:37.608 17:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:38:37.608 17:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:38:37.608 17:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:38:37.608 17:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:38:37.608 17:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:38:40.157 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:38:40.157 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:38:40.157 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:38:40.157 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:38:40.157 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:38:40.157 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:38:40.157 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:38:40.157 [global] 00:38:40.157 thread=1 00:38:40.157 invalidate=1 
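[annotation] Test case2 connects the host to the same subsystem over both listeners (ports 4420 and 4421) to prove multipath access, then waitforserial polls lsblk until a namespace carrying the subsystem's serial appears; condensed from the trace:
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
# waitforserial, simplified: loop until a block device carries the serial
while [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -lt 1 ]; do
    sleep 2
done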
00:38:40.157 rw=write 00:38:40.157 time_based=1 00:38:40.157 runtime=1 00:38:40.157 ioengine=libaio 00:38:40.157 direct=1 00:38:40.157 bs=4096 00:38:40.157 iodepth=1 00:38:40.157 norandommap=0 00:38:40.157 numjobs=1 00:38:40.157 00:38:40.157 verify_dump=1 00:38:40.157 verify_backlog=512 00:38:40.157 verify_state_save=0 00:38:40.157 do_verify=1 00:38:40.157 verify=crc32c-intel 00:38:40.157 [job0] 00:38:40.157 filename=/dev/nvme0n1 00:38:40.157 Could not set queue depth (nvme0n1) 00:38:40.157 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:40.157 fio-3.35 00:38:40.157 Starting 1 thread 00:38:41.542 00:38:41.542 job0: (groupid=0, jobs=1): err= 0: pid=642098: Tue Oct 8 17:54:33 2024 00:38:41.542 read: IOPS=69, BW=280KiB/s (287kB/s)(288KiB/1029msec) 00:38:41.542 slat (nsec): min=5895, max=12133, avg=8114.85, stdev=1543.89 00:38:41.542 clat (usec): min=444, max=41043, avg=11317.42, stdev=17887.96 00:38:41.542 lat (usec): min=452, max=41053, avg=11325.54, stdev=17889.17 00:38:41.542 clat percentiles (usec): 00:38:41.542 | 1.00th=[ 445], 5.00th=[ 578], 10.00th=[ 652], 20.00th=[ 660], 00:38:41.542 | 30.00th=[ 676], 40.00th=[ 685], 50.00th=[ 701], 60.00th=[ 734], 00:38:41.542 | 70.00th=[ 758], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:41.542 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:38:41.542 | 99.99th=[41157] 00:38:41.542 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:38:41.542 slat (usec): min=6, max=30796, avg=71.75, stdev=1360.52 00:38:41.542 clat (usec): min=233, max=614, avg=341.16, stdev=48.45 00:38:41.542 lat (usec): min=240, max=31137, avg=412.92, stdev=1361.53 00:38:41.542 clat percentiles (usec): 00:38:41.542 | 1.00th=[ 239], 5.00th=[ 255], 10.00th=[ 293], 20.00th=[ 314], 00:38:41.542 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 347], 00:38:41.542 | 70.00th=[ 355], 80.00th=[ 367], 90.00th=[ 383], 95.00th=[ 437], 00:38:41.542 | 99.00th=[ 519], 99.50th=[ 529], 99.90th=[ 619], 99.95th=[ 619], 00:38:41.542 | 99.99th=[ 619] 00:38:41.542 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:38:41.542 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:41.542 lat (usec) : 250=2.74%, 500=84.08%, 750=8.90%, 1000=1.03% 00:38:41.542 lat (msec) : 50=3.25% 00:38:41.542 cpu : usr=0.19%, sys=0.68%, ctx=587, majf=0, minf=1 00:38:41.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:41.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:41.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:41.542 issued rwts: total=72,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:41.542 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:41.542 00:38:41.542 Run status group 0 (all jobs): 00:38:41.542 READ: bw=280KiB/s (287kB/s), 280KiB/s-280KiB/s (287kB/s-287kB/s), io=288KiB (295kB), run=1029-1029msec 00:38:41.542 WRITE: bw=1990KiB/s (2038kB/s), 1990KiB/s-1990KiB/s (2038kB/s-2038kB/s), io=2048KiB (2097kB), run=1029-1029msec 00:38:41.542 00:38:41.542 Disk stats (read/write): 00:38:41.543 nvme0n1: ios=120/512, merge=0/0, ticks=1650/172, in_queue=1822, util=98.80% 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:41.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:38:41.543 17:54:33 
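[annotation] The [global] dump interleaved with timestamps above is the job file that fio-wrapper generated; reassembled, the 1-second verified sequential-write job it ran against the attached namespace is (runnable as 'fio nmic.fio', assuming the same device node):
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel
[job0]
filename=/dev/nvme0n1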
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:41.543 rmmod nvme_tcp 00:38:41.543 rmmod nvme_fabrics 00:38:41.543 rmmod nvme_keyring 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 640954 ']' 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 640954 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 640954 ']' 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 640954 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 640954 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 
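[annotation] Teardown runs in the reverse order of setup: disconnect the host (both controllers at once, hence "disconnected 2 controller(s)"), unload the host-side modules (the rmmod lines above are modprobe -r output), then kill the target and wait for it; condensed:
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring, per the rmmod output
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                # killprocess first probes the pid with kill -0 and checks the comm name
wait "$nvmfpid"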
'killing process with pid 640954' 00:38:41.543 killing process with pid 640954 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 640954 00:38:41.543 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 640954 00:38:41.803 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:41.803 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:41.803 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:41.803 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:38:41.803 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:38:41.803 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:41.803 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:38:41.803 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:41.803 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:41.803 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:41.803 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:41.803 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:43.715 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:43.715 00:38:43.715 real 0m15.985s 00:38:43.715 user 0m36.466s 00:38:43.715 sys 0m7.494s 00:38:43.715 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:43.715 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:43.715 ************************************ 00:38:43.715 END TEST nvmf_nmic 00:38:43.715 ************************************ 00:38:43.715 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:38:43.715 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:43.715 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:43.715 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:43.976 ************************************ 00:38:43.976 START TEST nvmf_fio_target 00:38:43.976 ************************************ 00:38:43.976 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:38:43.976 * Looking for test storage... 
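[annotation] nvmftestfini then strips only its own firewall rule by filtering on the SPDK_NVMF comment tag, removes the target namespace, and flushes the initiator address; condensed from the trace (the body of _remove_spdk_ns is not shown in the log, so the netns delete below is an assumed equivalent):
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1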
00:38:43.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:43.976 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:43.976 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:38:43.976 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:43.976 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:43.976 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:43.976 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:43.976 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:43.976 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:38:43.976 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:38:43.976 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:38:43.976 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:43.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:43.977 --rc genhtml_branch_coverage=1 00:38:43.977 --rc genhtml_function_coverage=1 00:38:43.977 --rc genhtml_legend=1 00:38:43.977 --rc geninfo_all_blocks=1 00:38:43.977 --rc geninfo_unexecuted_blocks=1 00:38:43.977 00:38:43.977 ' 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:43.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:43.977 --rc genhtml_branch_coverage=1 00:38:43.977 --rc genhtml_function_coverage=1 00:38:43.977 --rc genhtml_legend=1 00:38:43.977 --rc geninfo_all_blocks=1 00:38:43.977 --rc geninfo_unexecuted_blocks=1 00:38:43.977 00:38:43.977 ' 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:43.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:43.977 --rc genhtml_branch_coverage=1 00:38:43.977 --rc genhtml_function_coverage=1 00:38:43.977 --rc genhtml_legend=1 00:38:43.977 --rc geninfo_all_blocks=1 00:38:43.977 --rc geninfo_unexecuted_blocks=1 00:38:43.977 00:38:43.977 ' 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:43.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:43.977 --rc genhtml_branch_coverage=1 00:38:43.977 --rc genhtml_function_coverage=1 00:38:43.977 --rc genhtml_legend=1 00:38:43.977 --rc geninfo_all_blocks=1 00:38:43.977 --rc geninfo_unexecuted_blocks=1 00:38:43.977 
00:38:43.977 ' 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:43.977 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:43.978 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:43.978 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:43.978 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:43.978 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:43.978 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:38:43.978 17:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:52.128 17:54:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:52.128 17:54:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:52.128 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:52.128 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:52.129 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:52.129 Found net 
devices under 0000:31:00.0: cvl_0_0 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:52.129 Found net devices under 0000:31:00.1: cvl_0_1 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:52.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:52.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:38:52.129 00:38:52.129 --- 10.0.0.2 ping statistics --- 00:38:52.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:52.129 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:52.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:52.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:38:52.129 00:38:52.129 --- 10.0.0.1 ping statistics --- 00:38:52.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:52.129 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=646507 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 646507 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 646507 ']' 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:52.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
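Note: the entries above show the target bring-up pattern: nvmfappstart prepends NVMF_TARGET_NS_CMD (ip netns exec cvl_0_0_ns_spdk) to the nvmf_tgt command line, records nvmfpid, and waitforlisten then blocks until the app answers on its UNIX-domain RPC socket. A minimal standalone sketch of that start-and-wait pattern, reusing the paths and flags from the trace; the polling loop below is illustrative, not the harness's actual waitforlisten implementation:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC_SOCK=/var/tmp/spdk.sock

    # Launch the target inside the test namespace, as traced above.
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!

    # Poll until the RPC socket answers; UNIX sockets live in the filesystem,
    # so rpc.py can probe from the root namespace.
    for _ in $(seq 1 100); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done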
00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:52.129 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:52.129 [2024-10-08 17:54:43.553956] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:52.129 [2024-10-08 17:54:43.555035] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:38:52.129 [2024-10-08 17:54:43.555080] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:52.129 [2024-10-08 17:54:43.641161] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:52.129 [2024-10-08 17:54:43.707171] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:52.129 [2024-10-08 17:54:43.707208] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:52.129 [2024-10-08 17:54:43.707216] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:52.129 [2024-10-08 17:54:43.707223] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:52.129 [2024-10-08 17:54:43.707229] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:52.129 [2024-10-08 17:54:43.709030] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:38:52.130 [2024-10-08 17:54:43.709188] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:38:52.130 [2024-10-08 17:54:43.709328] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:38:52.130 [2024-10-08 17:54:43.709328] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:38:52.130 [2024-10-08 17:54:43.775872] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:52.130 [2024-10-08 17:54:43.776796] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:52.130 [2024-10-08 17:54:43.776954] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:52.130 [2024-10-08 17:54:43.777695] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:52.130 [2024-10-08 17:54:43.777755] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
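Note: the NOTICE lines above confirm that -e 0xFFFF --interrupt-mode took effect: interrupt mode is enabled before the four reactors start, so each reactor waits on an event fd instead of busy-polling, and every nvmf_tgt poll-group thread is created directly in interrupt mode (hence "to intr mode from intr mode"). The mode is only announced at startup, so one way to assert it from a script is to scan a captured target log; this sketch assumes nvmf_tgt stdout was redirected to nvmf_tgt.log, which this harness does not do (it interleaves the output into the build log):

    # Expect the global enable notice plus one "to intr mode" line per
    # spdk_thread (app_thread + one per poll group; 5 in the trace above).
    grep -q 'Set SPDK running in interrupt mode' nvmf_tgt.log &&
        grep -c 'to intr mode' nvmf_tgt.log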
00:38:52.390 17:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:52.390 17:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:38:52.390 17:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:52.390 17:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:52.390 17:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:52.651 17:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:52.651 17:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:52.651 [2024-10-08 17:54:44.550257] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:52.651 17:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:52.912 17:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:38:52.912 17:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:53.174 17:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:38:53.174 17:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:53.435 17:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:38:53.435 17:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:53.435 17:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:38:53.435 17:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:38:53.696 17:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:53.957 17:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:38:53.957 17:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:53.957 17:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:38:53.957 17:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:54.217 17:54:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:38:54.217 17:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:38:54.478 17:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:54.478 17:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:38:54.478 17:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:54.738 17:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:38:54.738 17:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:38:54.999 17:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:54.999 [2024-10-08 17:54:46.962052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:55.259 17:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:38:55.259 17:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:38:55.520 17:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:56.092 17:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:38:56.092 17:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:38:56.092 17:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:38:56.092 17:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:38:56.092 17:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:38:56.092 17:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:38:58.003 17:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:38:58.003 17:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:38:58.003 17:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:38:58.003 17:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:38:58.003 17:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:38:58.003 17:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:38:58.003 17:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:38:58.003 [global] 00:38:58.003 thread=1 00:38:58.003 invalidate=1 00:38:58.003 rw=write 00:38:58.003 time_based=1 00:38:58.003 runtime=1 00:38:58.003 ioengine=libaio 00:38:58.003 direct=1 00:38:58.003 bs=4096 00:38:58.003 iodepth=1 00:38:58.003 norandommap=0 00:38:58.003 numjobs=1 00:38:58.003 00:38:58.003 verify_dump=1 00:38:58.003 verify_backlog=512 00:38:58.003 verify_state_save=0 00:38:58.003 do_verify=1 00:38:58.003 verify=crc32c-intel 00:38:58.003 [job0] 00:38:58.003 filename=/dev/nvme0n1 00:38:58.003 [job1] 00:38:58.003 filename=/dev/nvme0n2 00:38:58.003 [job2] 00:38:58.003 filename=/dev/nvme0n3 00:38:58.003 [job3] 00:38:58.003 filename=/dev/nvme0n4 00:38:58.003 Could not set queue depth (nvme0n1) 00:38:58.003 Could not set queue depth (nvme0n2) 00:38:58.003 Could not set queue depth (nvme0n3) 00:38:58.003 Could not set queue depth (nvme0n4) 00:38:58.571 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:58.571 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:58.571 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:58.571 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:58.571 fio-3.35 00:38:58.571 Starting 4 threads 00:38:59.958 00:38:59.958 job0: (groupid=0, jobs=1): err= 0: pid=648086: Tue Oct 8 17:54:51 2024 00:38:59.958 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:38:59.958 slat (nsec): min=8785, max=66606, avg=23754.27, stdev=7828.93 00:38:59.958 clat (usec): min=487, max=1538, avg=918.01, stdev=101.47 00:38:59.958 lat (usec): min=498, max=1566, avg=941.76, stdev=105.22 00:38:59.958 clat percentiles (usec): 00:38:59.958 | 1.00th=[ 619], 5.00th=[ 750], 10.00th=[ 791], 20.00th=[ 840], 00:38:59.958 | 30.00th=[ 873], 40.00th=[ 906], 50.00th=[ 938], 60.00th=[ 955], 00:38:59.958 | 70.00th=[ 971], 80.00th=[ 996], 90.00th=[ 1029], 95.00th=[ 1057], 00:38:59.958 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1532], 99.95th=[ 1532], 00:38:59.958 | 99.99th=[ 1532] 00:38:59.958 write: IOPS=847, BW=3389KiB/s (3470kB/s)(3392KiB/1001msec); 0 zone resets 00:38:59.958 slat (nsec): min=9629, max=62448, avg=28779.67, stdev=11548.66 00:38:59.958 clat (usec): min=187, max=1364, avg=571.31, stdev=146.19 00:38:59.958 lat (usec): min=200, max=1377, avg=600.09, stdev=151.23 00:38:59.958 clat percentiles (usec): 00:38:59.958 | 1.00th=[ 237], 5.00th=[ 363], 10.00th=[ 392], 20.00th=[ 453], 00:38:59.958 | 30.00th=[ 490], 40.00th=[ 523], 50.00th=[ 562], 60.00th=[ 594], 00:38:59.959 | 70.00th=[ 644], 80.00th=[ 701], 90.00th=[ 758], 95.00th=[ 799], 00:38:59.959 | 99.00th=[ 938], 
99.50th=[ 1029], 99.90th=[ 1369], 99.95th=[ 1369], 00:38:59.959 | 99.99th=[ 1369] 00:38:59.959 bw ( KiB/s): min= 4096, max= 4096, per=44.16%, avg=4096.00, stdev= 0.00, samples=1 00:38:59.959 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:59.959 lat (usec) : 250=0.74%, 500=20.00%, 750=36.62%, 1000=35.81% 00:38:59.959 lat (msec) : 2=6.84% 00:38:59.959 cpu : usr=3.00%, sys=4.30%, ctx=1363, majf=0, minf=1 00:38:59.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:59.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.959 issued rwts: total=512,848,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:59.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:59.959 job1: (groupid=0, jobs=1): err= 0: pid=648087: Tue Oct 8 17:54:51 2024 00:38:59.959 read: IOPS=16, BW=67.3KiB/s (68.9kB/s)(68.0KiB/1011msec) 00:38:59.959 slat (nsec): min=26993, max=28024, avg=27358.35, stdev=313.91 00:38:59.959 clat (usec): min=1015, max=42034, avg=39404.53, stdev=9897.94 00:38:59.959 lat (usec): min=1042, max=42061, avg=39431.89, stdev=9898.01 00:38:59.959 clat percentiles (usec): 00:38:59.959 | 1.00th=[ 1020], 5.00th=[ 1020], 10.00th=[41157], 20.00th=[41157], 00:38:59.959 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:38:59.959 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:59.959 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:59.959 | 99.99th=[42206] 00:38:59.959 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:38:59.959 slat (usec): min=9, max=100, avg=31.76, stdev=11.95 00:38:59.959 clat (usec): min=273, max=1048, avg=625.36, stdev=130.76 00:38:59.959 lat (usec): min=285, max=1086, avg=657.12, stdev=136.83 00:38:59.959 clat percentiles (usec): 00:38:59.960 | 1.00th=[ 326], 5.00th=[ 396], 10.00th=[ 433], 20.00th=[ 510], 00:38:59.960 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 644], 60.00th=[ 676], 00:38:59.960 | 70.00th=[ 701], 80.00th=[ 742], 90.00th=[ 783], 95.00th=[ 816], 00:38:59.960 | 99.00th=[ 865], 99.50th=[ 898], 99.90th=[ 1057], 99.95th=[ 1057], 00:38:59.960 | 99.99th=[ 1057] 00:38:59.960 bw ( KiB/s): min= 4096, max= 4096, per=44.16%, avg=4096.00, stdev= 0.00, samples=1 00:38:59.960 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:59.960 lat (usec) : 500=18.34%, 750=61.44%, 1000=16.82% 00:38:59.960 lat (msec) : 2=0.38%, 50=3.02% 00:38:59.960 cpu : usr=0.89%, sys=2.18%, ctx=532, majf=0, minf=1 00:38:59.960 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:59.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.960 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:59.960 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:59.960 job2: (groupid=0, jobs=1): err= 0: pid=648088: Tue Oct 8 17:54:51 2024 00:38:59.960 read: IOPS=16, BW=66.1KiB/s (67.7kB/s)(68.0KiB/1028msec) 00:38:59.960 slat (nsec): min=25234, max=26040, avg=25599.12, stdev=281.64 00:38:59.960 clat (usec): min=1111, max=42064, avg=39431.09, stdev=9880.95 00:38:59.960 lat (usec): min=1136, max=42089, avg=39456.69, stdev=9880.90 00:38:59.960 clat percentiles (usec): 00:38:59.960 | 1.00th=[ 1106], 5.00th=[ 1106], 10.00th=[40633], 20.00th=[41681], 00:38:59.960 | 
30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:38:59.960 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:59.960 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:59.960 | 99.99th=[42206] 00:38:59.960 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:38:59.960 slat (nsec): min=10234, max=61841, avg=30526.18, stdev=9655.78 00:38:59.960 clat (usec): min=301, max=1059, avg=659.21, stdev=123.27 00:38:59.960 lat (usec): min=312, max=1071, avg=689.74, stdev=127.24 00:38:59.960 clat percentiles (usec): 00:38:59.960 | 1.00th=[ 371], 5.00th=[ 424], 10.00th=[ 490], 20.00th=[ 562], 00:38:59.960 | 30.00th=[ 611], 40.00th=[ 635], 50.00th=[ 668], 60.00th=[ 701], 00:38:59.961 | 70.00th=[ 725], 80.00th=[ 758], 90.00th=[ 807], 95.00th=[ 840], 00:38:59.961 | 99.00th=[ 938], 99.50th=[ 996], 99.90th=[ 1057], 99.95th=[ 1057], 00:38:59.961 | 99.99th=[ 1057] 00:38:59.961 bw ( KiB/s): min= 4096, max= 4096, per=44.16%, avg=4096.00, stdev= 0.00, samples=1 00:38:59.961 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:59.961 lat (usec) : 500=11.53%, 750=63.89%, 1000=21.17% 00:38:59.961 lat (msec) : 2=0.38%, 50=3.02% 00:38:59.961 cpu : usr=0.58%, sys=1.66%, ctx=532, majf=0, minf=1 00:38:59.961 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:59.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.961 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:59.961 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:59.961 job3: (groupid=0, jobs=1): err= 0: pid=648089: Tue Oct 8 17:54:51 2024 00:38:59.961 read: IOPS=15, BW=62.8KiB/s (64.3kB/s)(64.0KiB/1019msec) 00:38:59.961 slat (nsec): min=25716, max=30839, avg=26328.50, stdev=1236.29 00:38:59.961 clat (usec): min=40857, max=42798, avg=41653.04, stdev=545.79 00:38:59.961 lat (usec): min=40883, max=42829, avg=41679.37, stdev=546.39 00:38:59.961 clat percentiles (usec): 00:38:59.961 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:38:59.961 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:38:59.961 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:38:59.961 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:38:59.961 | 99.99th=[42730] 00:38:59.961 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:38:59.961 slat (nsec): min=9321, max=51913, avg=31616.82, stdev=7493.18 00:38:59.961 clat (usec): min=239, max=997, avg=648.60, stdev=131.30 00:38:59.961 lat (usec): min=251, max=1030, avg=680.21, stdev=133.51 00:38:59.961 clat percentiles (usec): 00:38:59.961 | 1.00th=[ 310], 5.00th=[ 416], 10.00th=[ 474], 20.00th=[ 537], 00:38:59.961 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 693], 00:38:59.961 | 70.00th=[ 725], 80.00th=[ 758], 90.00th=[ 807], 95.00th=[ 848], 00:38:59.962 | 99.00th=[ 947], 99.50th=[ 963], 99.90th=[ 996], 99.95th=[ 996], 00:38:59.962 | 99.99th=[ 996] 00:38:59.962 bw ( KiB/s): min= 4096, max= 4096, per=44.16%, avg=4096.00, stdev= 0.00, samples=1 00:38:59.962 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:59.962 lat (usec) : 250=0.38%, 500=12.12%, 750=63.64%, 1000=20.83% 00:38:59.962 lat (msec) : 50=3.03% 00:38:59.962 cpu : usr=1.18%, sys=1.87%, ctx=529, majf=0, minf=2 00:38:59.962 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:59.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.962 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.962 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:59.962 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:59.962 00:38:59.962 Run status group 0 (all jobs): 00:38:59.962 READ: bw=2187KiB/s (2239kB/s), 62.8KiB/s-2046KiB/s (64.3kB/s-2095kB/s), io=2248KiB (2302kB), run=1001-1028msec 00:38:59.962 WRITE: bw=9276KiB/s (9499kB/s), 1992KiB/s-3389KiB/s (2040kB/s-3470kB/s), io=9536KiB (9765kB), run=1001-1028msec 00:38:59.962 00:38:59.962 Disk stats (read/write): 00:38:59.962 nvme0n1: ios=534/557, merge=0/0, ticks=1308/265, in_queue=1573, util=83.97% 00:38:59.962 nvme0n2: ios=58/512, merge=0/0, ticks=554/262, in_queue=816, util=90.91% 00:38:59.962 nvme0n3: ios=34/512, merge=0/0, ticks=1345/334, in_queue=1679, util=92.06% 00:38:59.962 nvme0n4: ios=68/512, merge=0/0, ticks=560/258, in_queue=818, util=97.11% 00:38:59.962 17:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:38:59.962 [global] 00:38:59.962 thread=1 00:38:59.962 invalidate=1 00:38:59.962 rw=randwrite 00:38:59.962 time_based=1 00:38:59.962 runtime=1 00:38:59.962 ioengine=libaio 00:38:59.962 direct=1 00:38:59.962 bs=4096 00:38:59.962 iodepth=1 00:38:59.962 norandommap=0 00:38:59.962 numjobs=1 00:38:59.962 00:38:59.962 verify_dump=1 00:38:59.962 verify_backlog=512 00:38:59.962 verify_state_save=0 00:38:59.962 do_verify=1 00:38:59.962 verify=crc32c-intel 00:38:59.962 [job0] 00:38:59.962 filename=/dev/nvme0n1 00:38:59.962 [job1] 00:38:59.962 filename=/dev/nvme0n2 00:38:59.962 [job2] 00:38:59.962 filename=/dev/nvme0n3 00:38:59.962 [job3] 00:38:59.962 filename=/dev/nvme0n4 00:38:59.962 Could not set queue depth (nvme0n1) 00:38:59.962 Could not set queue depth (nvme0n2) 00:38:59.962 Could not set queue depth (nvme0n3) 00:38:59.962 Could not set queue depth (nvme0n4) 00:38:59.962 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:59.962 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:59.962 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:59.962 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:59.962 fio-3.35 00:38:59.962 Starting 4 threads 00:39:01.349 00:39:01.349 job0: (groupid=0, jobs=1): err= 0: pid=648559: Tue Oct 8 17:54:53 2024 00:39:01.349 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:39:01.349 slat (nsec): min=26881, max=45295, avg=27785.43, stdev=2218.73 00:39:01.349 clat (usec): min=643, max=1293, avg=975.27, stdev=93.58 00:39:01.349 lat (usec): min=671, max=1321, avg=1003.05, stdev=93.25 00:39:01.349 clat percentiles (usec): 00:39:01.349 | 1.00th=[ 734], 5.00th=[ 799], 10.00th=[ 857], 20.00th=[ 914], 00:39:01.349 | 30.00th=[ 938], 40.00th=[ 963], 50.00th=[ 988], 60.00th=[ 1004], 00:39:01.349 | 70.00th=[ 1020], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1123], 00:39:01.349 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1287], 99.95th=[ 1287], 00:39:01.349 | 99.99th=[ 1287] 00:39:01.349 write: IOPS=752, BW=3009KiB/s (3081kB/s)(3012KiB/1001msec); 0 zone resets 
00:39:01.349 slat (nsec): min=9235, max=64938, avg=31860.00, stdev=9165.73 00:39:01.349 clat (usec): min=239, max=993, avg=600.45, stdev=120.26 00:39:01.349 lat (usec): min=273, max=1026, avg=632.31, stdev=123.22 00:39:01.349 clat percentiles (usec): 00:39:01.349 | 1.00th=[ 318], 5.00th=[ 396], 10.00th=[ 445], 20.00th=[ 490], 00:39:01.349 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 635], 00:39:01.349 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 750], 95.00th=[ 783], 00:39:01.349 | 99.00th=[ 873], 99.50th=[ 914], 99.90th=[ 996], 99.95th=[ 996], 00:39:01.349 | 99.99th=[ 996] 00:39:01.349 bw ( KiB/s): min= 4096, max= 4096, per=29.93%, avg=4096.00, stdev= 0.00, samples=1 00:39:01.349 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:01.349 lat (usec) : 250=0.16%, 500=13.44%, 750=40.95%, 1000=28.22% 00:39:01.349 lat (msec) : 2=17.23% 00:39:01.349 cpu : usr=2.50%, sys=5.40%, ctx=1267, majf=0, minf=1 00:39:01.349 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:01.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.349 issued rwts: total=512,753,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:01.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:01.349 job1: (groupid=0, jobs=1): err= 0: pid=648569: Tue Oct 8 17:54:53 2024 00:39:01.349 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:39:01.349 slat (nsec): min=7388, max=45327, avg=27309.39, stdev=2588.67 00:39:01.349 clat (usec): min=788, max=1392, avg=1127.65, stdev=81.58 00:39:01.349 lat (usec): min=816, max=1419, avg=1154.96, stdev=81.82 00:39:01.349 clat percentiles (usec): 00:39:01.349 | 1.00th=[ 889], 5.00th=[ 988], 10.00th=[ 1037], 20.00th=[ 1074], 00:39:01.349 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1156], 00:39:01.349 | 70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1254], 00:39:01.349 | 99.00th=[ 1319], 99.50th=[ 1352], 99.90th=[ 1385], 99.95th=[ 1385], 00:39:01.349 | 99.99th=[ 1385] 00:39:01.349 write: IOPS=773, BW=3093KiB/s (3167kB/s)(3096KiB/1001msec); 0 zone resets 00:39:01.349 slat (nsec): min=9338, max=53460, avg=22412.18, stdev=12091.31 00:39:01.349 clat (usec): min=239, max=1388, avg=494.82, stdev=151.31 00:39:01.349 lat (usec): min=256, max=1422, avg=517.23, stdev=160.02 00:39:01.349 clat percentiles (usec): 00:39:01.349 | 1.00th=[ 269], 5.00th=[ 302], 10.00th=[ 326], 20.00th=[ 359], 00:39:01.349 | 30.00th=[ 375], 40.00th=[ 408], 50.00th=[ 465], 60.00th=[ 529], 00:39:01.349 | 70.00th=[ 594], 80.00th=[ 644], 90.00th=[ 709], 95.00th=[ 750], 00:39:01.349 | 99.00th=[ 807], 99.50th=[ 865], 99.90th=[ 1385], 99.95th=[ 1385], 00:39:01.349 | 99.99th=[ 1385] 00:39:01.349 bw ( KiB/s): min= 4096, max= 4096, per=29.93%, avg=4096.00, stdev= 0.00, samples=1 00:39:01.349 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:01.349 lat (usec) : 250=0.16%, 500=33.36%, 750=23.41%, 1000=5.60% 00:39:01.349 lat (msec) : 2=37.48% 00:39:01.349 cpu : usr=2.60%, sys=3.70%, ctx=1288, majf=0, minf=1 00:39:01.349 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:01.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.349 issued rwts: total=512,774,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:01.349 latency : target=0, window=0, percentile=100.00%, depth=1 
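Note: in the per-job blocks above, fio breaks latency into slat (submission), clat (completion), and lat (total), with lat ≈ slat + clat for each I/O. Checking job0's read side from this run as a worked example: avg slat 27785 ns ≈ 27.8 µs, avg clat 975.27 µs, and 27.8 µs + 975.3 µs ≈ 1003.1 µs, which matches the reported avg lat of 1003.05 µs. Since these jobs run at iodepth=1, each average is a per-request service time, with no same-job queueing inflating clat.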
00:39:01.349 job2: (groupid=0, jobs=1): err= 0: pid=648584: Tue Oct 8 17:54:53 2024 00:39:01.349 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:39:01.349 slat (nsec): min=7145, max=57645, avg=24265.61, stdev=8125.38 00:39:01.349 clat (usec): min=595, max=41020, avg=1107.51, stdev=3308.53 00:39:01.349 lat (usec): min=622, max=41047, avg=1131.78, stdev=3308.81 00:39:01.349 clat percentiles (usec): 00:39:01.349 | 1.00th=[ 627], 5.00th=[ 693], 10.00th=[ 717], 20.00th=[ 766], 00:39:01.349 | 30.00th=[ 799], 40.00th=[ 816], 50.00th=[ 824], 60.00th=[ 840], 00:39:01.349 | 70.00th=[ 857], 80.00th=[ 873], 90.00th=[ 906], 95.00th=[ 922], 00:39:01.349 | 99.00th=[ 1012], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:01.349 | 99.99th=[41157] 00:39:01.349 write: IOPS=873, BW=3493KiB/s (3576kB/s)(3496KiB/1001msec); 0 zone resets 00:39:01.349 slat (nsec): min=9362, max=55233, avg=27209.79, stdev=10865.51 00:39:01.349 clat (usec): min=149, max=3290, avg=442.75, stdev=165.41 00:39:01.349 lat (usec): min=183, max=3323, avg=469.96, stdev=168.37 00:39:01.349 clat percentiles (usec): 00:39:01.349 | 1.00th=[ 255], 5.00th=[ 281], 10.00th=[ 314], 20.00th=[ 355], 00:39:01.349 | 30.00th=[ 379], 40.00th=[ 412], 50.00th=[ 449], 60.00th=[ 465], 00:39:01.349 | 70.00th=[ 482], 80.00th=[ 502], 90.00th=[ 537], 95.00th=[ 603], 00:39:01.349 | 99.00th=[ 750], 99.50th=[ 824], 99.90th=[ 3294], 99.95th=[ 3294], 00:39:01.349 | 99.99th=[ 3294] 00:39:01.349 bw ( KiB/s): min= 4096, max= 4096, per=29.93%, avg=4096.00, stdev= 0.00, samples=1 00:39:01.349 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:01.349 lat (usec) : 250=0.51%, 500=49.64%, 750=18.76%, 1000=30.52% 00:39:01.349 lat (msec) : 2=0.14%, 4=0.14%, 50=0.29% 00:39:01.349 cpu : usr=1.70%, sys=3.90%, ctx=1387, majf=0, minf=1 00:39:01.349 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:01.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.349 issued rwts: total=512,874,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:01.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:01.349 job3: (groupid=0, jobs=1): err= 0: pid=648588: Tue Oct 8 17:54:53 2024 00:39:01.349 read: IOPS=637, BW=2549KiB/s (2611kB/s)(2552KiB/1001msec) 00:39:01.349 slat (nsec): min=7337, max=51686, avg=25958.21, stdev=5169.40 00:39:01.349 clat (usec): min=309, max=1097, avg=813.34, stdev=144.40 00:39:01.349 lat (usec): min=335, max=1123, avg=839.30, stdev=144.71 00:39:01.349 clat percentiles (usec): 00:39:01.349 | 1.00th=[ 412], 5.00th=[ 545], 10.00th=[ 619], 20.00th=[ 685], 00:39:01.349 | 30.00th=[ 750], 40.00th=[ 799], 50.00th=[ 840], 60.00th=[ 881], 00:39:01.349 | 70.00th=[ 914], 80.00th=[ 947], 90.00th=[ 971], 95.00th=[ 996], 00:39:01.349 | 99.00th=[ 1045], 99.50th=[ 1057], 99.90th=[ 1090], 99.95th=[ 1090], 00:39:01.349 | 99.99th=[ 1090] 00:39:01.349 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:39:01.349 slat (nsec): min=9671, max=55064, avg=31394.51, stdev=7899.57 00:39:01.349 clat (usec): min=183, max=705, avg=409.13, stdev=109.02 00:39:01.349 lat (usec): min=216, max=738, avg=440.53, stdev=109.89 00:39:01.349 clat percentiles (usec): 00:39:01.349 | 1.00th=[ 200], 5.00th=[ 255], 10.00th=[ 297], 20.00th=[ 318], 00:39:01.349 | 30.00th=[ 330], 40.00th=[ 351], 50.00th=[ 404], 60.00th=[ 437], 00:39:01.349 | 70.00th=[ 461], 80.00th=[ 498], 90.00th=[ 578], 95.00th=[ 611], 
00:39:01.349 | 99.00th=[ 668], 99.50th=[ 685], 99.90th=[ 701], 99.95th=[ 709], 00:39:01.349 | 99.99th=[ 709] 00:39:01.349 bw ( KiB/s): min= 4096, max= 4096, per=29.93%, avg=4096.00, stdev= 0.00, samples=1 00:39:01.349 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:01.349 lat (usec) : 250=2.89%, 500=47.95%, 750=22.20%, 1000=25.45% 00:39:01.349 lat (msec) : 2=1.50% 00:39:01.349 cpu : usr=2.50%, sys=5.10%, ctx=1663, majf=0, minf=1 00:39:01.349 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:01.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.349 issued rwts: total=638,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:01.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:01.349 00:39:01.349 Run status group 0 (all jobs): 00:39:01.349 READ: bw=8687KiB/s (8896kB/s), 2046KiB/s-2549KiB/s (2095kB/s-2611kB/s), io=8696KiB (8905kB), run=1001-1001msec 00:39:01.349 WRITE: bw=13.4MiB/s (14.0MB/s), 3009KiB/s-4092KiB/s (3081kB/s-4190kB/s), io=13.4MiB (14.0MB), run=1001-1001msec 00:39:01.349 00:39:01.349 Disk stats (read/write): 00:39:01.349 nvme0n1: ios=534/513, merge=0/0, ticks=1324/239, in_queue=1563, util=86.07% 00:39:01.349 nvme0n2: ios=563/525, merge=0/0, ticks=677/214, in_queue=891, util=90.86% 00:39:01.349 nvme0n3: ios=534/563, merge=0/0, ticks=1424/243, in_queue=1667, util=93.56% 00:39:01.349 nvme0n4: ios=562/937, merge=0/0, ticks=486/369, in_queue=855, util=95.70% 00:39:01.349 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:39:01.349 [global] 00:39:01.349 thread=1 00:39:01.349 invalidate=1 00:39:01.349 rw=write 00:39:01.349 time_based=1 00:39:01.349 runtime=1 00:39:01.349 ioengine=libaio 00:39:01.349 direct=1 00:39:01.349 bs=4096 00:39:01.349 iodepth=128 00:39:01.349 norandommap=0 00:39:01.349 numjobs=1 00:39:01.349 00:39:01.349 verify_dump=1 00:39:01.349 verify_backlog=512 00:39:01.349 verify_state_save=0 00:39:01.349 do_verify=1 00:39:01.349 verify=crc32c-intel 00:39:01.349 [job0] 00:39:01.349 filename=/dev/nvme0n1 00:39:01.349 [job1] 00:39:01.349 filename=/dev/nvme0n2 00:39:01.349 [job2] 00:39:01.349 filename=/dev/nvme0n3 00:39:01.349 [job3] 00:39:01.349 filename=/dev/nvme0n4 00:39:01.349 Could not set queue depth (nvme0n1) 00:39:01.349 Could not set queue depth (nvme0n2) 00:39:01.349 Could not set queue depth (nvme0n3) 00:39:01.349 Could not set queue depth (nvme0n4) 00:39:01.609 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:01.609 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:01.609 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:01.609 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:01.609 fio-3.35 00:39:01.609 Starting 4 threads 00:39:02.992 00:39:02.992 job0: (groupid=0, jobs=1): err= 0: pid=648993: Tue Oct 8 17:54:54 2024 00:39:02.992 read: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec) 00:39:02.992 slat (nsec): min=925, max=7179.1k, avg=64682.02, stdev=440245.53 00:39:02.992 clat (usec): min=4126, max=18689, avg=8558.03, stdev=2051.74 00:39:02.992 lat (usec): min=4132, max=19558, 
avg=8622.71, stdev=2090.79 00:39:02.992 clat percentiles (usec): 00:39:02.992 | 1.00th=[ 5014], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 7111], 00:39:02.992 | 30.00th=[ 7439], 40.00th=[ 7898], 50.00th=[ 8225], 60.00th=[ 8586], 00:39:02.992 | 70.00th=[ 8979], 80.00th=[10028], 90.00th=[10945], 95.00th=[12780], 00:39:02.992 | 99.00th=[14877], 99.50th=[15664], 99.90th=[18744], 99.95th=[18744], 00:39:02.992 | 99.99th=[18744] 00:39:02.992 write: IOPS=7445, BW=29.1MiB/s (30.5MB/s)(29.2MiB/1004msec); 0 zone resets 00:39:02.992 slat (nsec): min=1615, max=5749.6k, avg=64240.58, stdev=365941.48 00:39:02.992 clat (usec): min=1617, max=25652, avg=8756.30, stdev=3313.13 00:39:02.992 lat (usec): min=1624, max=25658, avg=8820.54, stdev=3338.16 00:39:02.992 clat percentiles (usec): 00:39:02.992 | 1.00th=[ 2737], 5.00th=[ 4490], 10.00th=[ 5735], 20.00th=[ 6915], 00:39:02.992 | 30.00th=[ 7177], 40.00th=[ 7504], 50.00th=[ 7767], 60.00th=[ 8094], 00:39:02.992 | 70.00th=[ 8979], 80.00th=[10683], 90.00th=[14353], 95.00th=[15401], 00:39:02.992 | 99.00th=[18482], 99.50th=[19792], 99.90th=[22938], 99.95th=[22938], 00:39:02.992 | 99.99th=[25560] 00:39:02.992 bw ( KiB/s): min=28672, max=30104, per=30.06%, avg=29388.00, stdev=1012.58, samples=2 00:39:02.992 iops : min= 7168, max= 7526, avg=7347.00, stdev=253.14, samples=2 00:39:02.992 lat (msec) : 2=0.15%, 4=1.32%, 10=78.12%, 20=20.17%, 50=0.25% 00:39:02.992 cpu : usr=4.39%, sys=7.78%, ctx=622, majf=0, minf=1 00:39:02.992 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:39:02.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:02.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:02.992 issued rwts: total=7168,7475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:02.992 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:02.992 job1: (groupid=0, jobs=1): err= 0: pid=649007: Tue Oct 8 17:54:54 2024 00:39:02.992 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:39:02.992 slat (nsec): min=935, max=10377k, avg=71433.83, stdev=556492.86 00:39:02.992 clat (usec): min=2805, max=27470, avg=9330.13, stdev=3511.17 00:39:02.992 lat (usec): min=2806, max=27476, avg=9401.57, stdev=3544.96 00:39:02.992 clat percentiles (usec): 00:39:02.992 | 1.00th=[ 4490], 5.00th=[ 5669], 10.00th=[ 6063], 20.00th=[ 6456], 00:39:02.992 | 30.00th=[ 7242], 40.00th=[ 7635], 50.00th=[ 8094], 60.00th=[ 9241], 00:39:02.992 | 70.00th=[10290], 80.00th=[11994], 90.00th=[14877], 95.00th=[16909], 00:39:02.992 | 99.00th=[18220], 99.50th=[26084], 99.90th=[26084], 99.95th=[26084], 00:39:02.992 | 99.99th=[27395] 00:39:02.992 write: IOPS=6150, BW=24.0MiB/s (25.2MB/s)(24.1MiB/1003msec); 0 zone resets 00:39:02.992 slat (nsec): min=1624, max=11171k, avg=85579.26, stdev=541517.06 00:39:02.992 clat (usec): min=486, max=69509, avg=11251.58, stdev=10952.57 00:39:02.992 lat (usec): min=1193, max=69517, avg=11337.16, stdev=11025.64 00:39:02.992 clat percentiles (usec): 00:39:02.992 | 1.00th=[ 2933], 5.00th=[ 3982], 10.00th=[ 4883], 20.00th=[ 5866], 00:39:02.992 | 30.00th=[ 6521], 40.00th=[ 7046], 50.00th=[ 7373], 60.00th=[ 8455], 00:39:02.992 | 70.00th=[11207], 80.00th=[13829], 90.00th=[17957], 95.00th=[24249], 00:39:02.992 | 99.00th=[63701], 99.50th=[64226], 99.90th=[69731], 99.95th=[69731], 00:39:02.992 | 99.99th=[69731] 00:39:02.992 bw ( KiB/s): min=24576, max=24576, per=25.14%, avg=24576.00, stdev= 0.00, samples=2 00:39:02.992 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:39:02.992 lat 
(usec) : 500=0.01% 00:39:02.992 lat (msec) : 2=0.08%, 4=3.02%, 10=64.42%, 20=28.77%, 50=2.01% 00:39:02.992 lat (msec) : 100=1.68% 00:39:02.992 cpu : usr=3.79%, sys=6.29%, ctx=509, majf=0, minf=2 00:39:02.992 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:39:02.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:02.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:02.992 issued rwts: total=6144,6169,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:02.992 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:02.992 job2: (groupid=0, jobs=1): err= 0: pid=649030: Tue Oct 8 17:54:54 2024 00:39:02.992 read: IOPS=5738, BW=22.4MiB/s (23.5MB/s)(22.6MiB/1006msec) 00:39:02.992 slat (nsec): min=960, max=12496k, avg=84786.73, stdev=619507.55 00:39:02.992 clat (usec): min=632, max=28690, avg=10867.56, stdev=4091.54 00:39:02.992 lat (usec): min=4688, max=28720, avg=10952.35, stdev=4139.44 00:39:02.992 clat percentiles (usec): 00:39:02.992 | 1.00th=[ 5014], 5.00th=[ 6652], 10.00th=[ 7046], 20.00th=[ 7439], 00:39:02.992 | 30.00th=[ 8029], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[10945], 00:39:02.992 | 70.00th=[11863], 80.00th=[14353], 90.00th=[16909], 95.00th=[19006], 00:39:02.992 | 99.00th=[23987], 99.50th=[24773], 99.90th=[25822], 99.95th=[25822], 00:39:02.992 | 99.99th=[28705] 00:39:02.992 write: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec); 0 zone resets 00:39:02.992 slat (nsec): min=1632, max=13695k, avg=75000.45, stdev=517498.50 00:39:02.992 clat (usec): min=862, max=37137, avg=10479.53, stdev=5182.91 00:39:02.992 lat (usec): min=902, max=37140, avg=10554.53, stdev=5226.71 00:39:02.992 clat percentiles (usec): 00:39:02.992 | 1.00th=[ 3687], 5.00th=[ 5145], 10.00th=[ 6063], 20.00th=[ 7242], 00:39:02.992 | 30.00th=[ 7832], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 9503], 00:39:02.992 | 70.00th=[10290], 80.00th=[12780], 90.00th=[18744], 95.00th=[21627], 00:39:02.992 | 99.00th=[26346], 99.50th=[33162], 99.90th=[35390], 99.95th=[35390], 00:39:02.992 | 99.99th=[36963] 00:39:02.992 bw ( KiB/s): min=18664, max=30488, per=25.14%, avg=24576.00, stdev=8360.83, samples=2 00:39:02.992 iops : min= 4666, max= 7622, avg=6144.00, stdev=2090.21, samples=2 00:39:02.992 lat (usec) : 750=0.01%, 1000=0.06% 00:39:02.992 lat (msec) : 2=0.03%, 4=0.55%, 10=58.62%, 20=35.13%, 50=5.60% 00:39:02.992 cpu : usr=3.08%, sys=5.77%, ctx=509, majf=0, minf=2 00:39:02.992 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:39:02.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:02.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:02.992 issued rwts: total=5773,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:02.992 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:02.992 job3: (groupid=0, jobs=1): err= 0: pid=649036: Tue Oct 8 17:54:54 2024 00:39:02.992 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:39:02.992 slat (nsec): min=947, max=15040k, avg=91783.57, stdev=702592.84 00:39:02.992 clat (usec): min=1854, max=57619, avg=12706.67, stdev=7122.27 00:39:02.992 lat (usec): min=1863, max=57626, avg=12798.46, stdev=7172.09 00:39:02.992 clat percentiles (usec): 00:39:02.992 | 1.00th=[ 2999], 5.00th=[ 6259], 10.00th=[ 6718], 20.00th=[ 7308], 00:39:02.992 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[10945], 60.00th=[12518], 00:39:02.992 | 70.00th=[14877], 80.00th=[16450], 90.00th=[21103], 95.00th=[22152], 00:39:02.992 | 
99.00th=[43254], 99.50th=[49546], 99.90th=[55837], 99.95th=[57410], 00:39:02.992 | 99.99th=[57410] 00:39:02.992 write: IOPS=4779, BW=18.7MiB/s (19.6MB/s)(18.7MiB/1004msec); 0 zone resets 00:39:02.992 slat (nsec): min=1678, max=13927k, avg=104192.17, stdev=734392.81 00:39:02.992 clat (usec): min=1704, max=57513, avg=14257.73, stdev=10837.57 00:39:02.992 lat (usec): min=1766, max=57517, avg=14361.93, stdev=10913.80 00:39:02.993 clat percentiles (usec): 00:39:02.993 | 1.00th=[ 3163], 5.00th=[ 4490], 10.00th=[ 5735], 20.00th=[ 6587], 00:39:02.993 | 30.00th=[ 7373], 40.00th=[ 9372], 50.00th=[11338], 60.00th=[12780], 00:39:02.993 | 70.00th=[15795], 80.00th=[17433], 90.00th=[28181], 95.00th=[41157], 00:39:02.993 | 99.00th=[55837], 99.50th=[56886], 99.90th=[57410], 99.95th=[57410], 00:39:02.993 | 99.99th=[57410] 00:39:02.993 bw ( KiB/s): min=16896, max=20480, per=19.12%, avg=18688.00, stdev=2534.27, samples=2 00:39:02.993 iops : min= 4224, max= 5120, avg=4672.00, stdev=633.57, samples=2 00:39:02.993 lat (msec) : 2=0.43%, 4=1.37%, 10=43.46%, 20=41.37%, 50=11.59% 00:39:02.993 lat (msec) : 100=1.79% 00:39:02.993 cpu : usr=3.79%, sys=4.79%, ctx=366, majf=0, minf=1 00:39:02.993 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:39:02.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:02.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:02.993 issued rwts: total=4608,4799,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:02.993 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:02.993 00:39:02.993 Run status group 0 (all jobs): 00:39:02.993 READ: bw=92.0MiB/s (96.5MB/s), 17.9MiB/s-27.9MiB/s (18.8MB/s-29.2MB/s), io=92.6MiB (97.0MB), run=1003-1006msec 00:39:02.993 WRITE: bw=95.5MiB/s (100MB/s), 18.7MiB/s-29.1MiB/s (19.6MB/s-30.5MB/s), io=96.0MiB (101MB), run=1003-1006msec 00:39:02.993 00:39:02.993 Disk stats (read/write): 00:39:02.993 nvme0n1: ios=6004/6144, merge=0/0, ticks=28964/30447, in_queue=59411, util=83.77% 00:39:02.993 nvme0n2: ios=4656/4935, merge=0/0, ticks=42579/58366, in_queue=100945, util=90.93% 00:39:02.993 nvme0n3: ios=4665/4833, merge=0/0, ticks=32803/33007, in_queue=65810, util=91.87% 00:39:02.993 nvme0n4: ios=3738/4091, merge=0/0, ticks=46105/51110, in_queue=97215, util=93.59% 00:39:02.993 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:39:02.993 [global] 00:39:02.993 thread=1 00:39:02.993 invalidate=1 00:39:02.993 rw=randwrite 00:39:02.993 time_based=1 00:39:02.993 runtime=1 00:39:02.993 ioengine=libaio 00:39:02.993 direct=1 00:39:02.993 bs=4096 00:39:02.993 iodepth=128 00:39:02.993 norandommap=0 00:39:02.993 numjobs=1 00:39:02.993 00:39:02.993 verify_dump=1 00:39:02.993 verify_backlog=512 00:39:02.993 verify_state_save=0 00:39:02.993 do_verify=1 00:39:02.993 verify=crc32c-intel 00:39:02.993 [job0] 00:39:02.993 filename=/dev/nvme0n1 00:39:02.993 [job1] 00:39:02.993 filename=/dev/nvme0n2 00:39:02.993 [job2] 00:39:02.993 filename=/dev/nvme0n3 00:39:02.993 [job3] 00:39:02.993 filename=/dev/nvme0n4 00:39:02.993 Could not set queue depth (nvme0n1) 00:39:02.993 Could not set queue depth (nvme0n2) 00:39:02.993 Could not set queue depth (nvme0n3) 00:39:02.993 Could not set queue depth (nvme0n4) 00:39:03.251 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:03.252 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:03.252 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:03.252 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:03.252 fio-3.35 00:39:03.252 Starting 4 threads 00:39:04.634 00:39:04.634 job0: (groupid=0, jobs=1): err= 0: pid=649422: Tue Oct 8 17:54:56 2024 00:39:04.634 read: IOPS=5136, BW=20.1MiB/s (21.0MB/s)(21.0MiB/1046msec) 00:39:04.634 slat (nsec): min=887, max=15937k, avg=95332.03, stdev=755508.93 00:39:04.634 clat (usec): min=2033, max=74039, avg=12797.72, stdev=10197.03 00:39:04.634 lat (usec): min=2038, max=83298, avg=12893.05, stdev=10252.99 00:39:04.634 clat percentiles (usec): 00:39:04.634 | 1.00th=[ 3392], 5.00th=[ 5538], 10.00th=[ 6521], 20.00th=[ 7046], 00:39:04.634 | 30.00th=[ 7701], 40.00th=[ 8455], 50.00th=[10028], 60.00th=[10945], 00:39:04.634 | 70.00th=[12125], 80.00th=[16581], 90.00th=[21103], 95.00th=[28705], 00:39:04.634 | 99.00th=[73925], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:39:04.634 | 99.99th=[73925] 00:39:04.634 write: IOPS=5384, BW=21.0MiB/s (22.1MB/s)(22.0MiB/1046msec); 0 zone resets 00:39:04.634 slat (nsec): min=1495, max=17729k, avg=75553.06, stdev=519450.64 00:39:04.634 clat (usec): min=1734, max=36722, avg=11355.76, stdev=6349.15 00:39:04.634 lat (usec): min=1745, max=36732, avg=11431.31, stdev=6393.67 00:39:04.634 clat percentiles (usec): 00:39:04.634 | 1.00th=[ 3458], 5.00th=[ 4686], 10.00th=[ 5997], 20.00th=[ 6849], 00:39:04.634 | 30.00th=[ 7373], 40.00th=[ 8291], 50.00th=[ 9241], 60.00th=[11469], 00:39:04.634 | 70.00th=[12125], 80.00th=[14353], 90.00th=[20841], 95.00th=[26346], 00:39:04.634 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:39:04.634 | 99.99th=[36963] 00:39:04.634 bw ( KiB/s): min=18448, max=26608, per=24.87%, avg=22528.00, stdev=5769.99, samples=2 00:39:04.634 iops : min= 4612, max= 6652, avg=5632.00, stdev=1442.50, samples=2 00:39:04.634 lat (msec) : 2=0.09%, 4=2.10%, 10=49.05%, 20=36.76%, 50=10.95% 00:39:04.634 lat (msec) : 100=1.05% 00:39:04.634 cpu : usr=2.58%, sys=5.36%, ctx=586, majf=0, minf=1 00:39:04.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:39:04.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:04.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:04.634 issued rwts: total=5373,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:04.634 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:04.634 job1: (groupid=0, jobs=1): err= 0: pid=649438: Tue Oct 8 17:54:56 2024 00:39:04.634 read: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec) 00:39:04.634 slat (nsec): min=914, max=7235.7k, avg=63101.89, stdev=416733.98 00:39:04.634 clat (usec): min=4210, max=20264, avg=8150.51, stdev=2083.35 00:39:04.634 lat (usec): min=4215, max=20279, avg=8213.61, stdev=2115.29 00:39:04.634 clat percentiles (usec): 00:39:04.634 | 1.00th=[ 5145], 5.00th=[ 5735], 10.00th=[ 6194], 20.00th=[ 6652], 00:39:04.634 | 30.00th=[ 6980], 40.00th=[ 7373], 50.00th=[ 7635], 60.00th=[ 7898], 00:39:04.634 | 70.00th=[ 8717], 80.00th=[ 9503], 90.00th=[10421], 95.00th=[12125], 00:39:04.634 | 99.00th=[15664], 99.50th=[17433], 99.90th=[19006], 99.95th=[19006], 00:39:04.634 | 99.99th=[20317] 00:39:04.634 write: IOPS=7896, BW=30.8MiB/s (32.3MB/s)(30.9MiB/1003msec); 0 zone resets 00:39:04.634 
slat (nsec): min=1547, max=7235.4k, avg=58515.14, stdev=360732.51 00:39:04.634 clat (usec): min=770, max=34315, avg=8117.91, stdev=3773.81 00:39:04.634 lat (usec): min=780, max=34325, avg=8176.42, stdev=3801.30 00:39:04.634 clat percentiles (usec): 00:39:04.634 | 1.00th=[ 2900], 5.00th=[ 4621], 10.00th=[ 5538], 20.00th=[ 6063], 00:39:04.634 | 30.00th=[ 6915], 40.00th=[ 7242], 50.00th=[ 7439], 60.00th=[ 7635], 00:39:04.634 | 70.00th=[ 8094], 80.00th=[ 8717], 90.00th=[10552], 95.00th=[14091], 00:39:04.634 | 99.00th=[28967], 99.50th=[31327], 99.90th=[33817], 99.95th=[34341], 00:39:04.634 | 99.99th=[34341] 00:39:04.634 bw ( KiB/s): min=30432, max=31912, per=34.41%, avg=31172.00, stdev=1046.52, samples=2 00:39:04.634 iops : min= 7608, max= 7978, avg=7793.00, stdev=261.63, samples=2 00:39:04.634 lat (usec) : 1000=0.02% 00:39:04.634 lat (msec) : 2=0.10%, 4=1.39%, 10=86.62%, 20=10.57%, 50=1.31% 00:39:04.634 cpu : usr=5.09%, sys=7.19%, ctx=603, majf=0, minf=1 00:39:04.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:39:04.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:04.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:04.634 issued rwts: total=7680,7920,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:04.634 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:04.634 job2: (groupid=0, jobs=1): err= 0: pid=649456: Tue Oct 8 17:54:56 2024 00:39:04.634 read: IOPS=4249, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1004msec) 00:39:04.634 slat (nsec): min=907, max=15637k, avg=122156.80, stdev=906407.75 00:39:04.634 clat (usec): min=2968, max=83360, avg=14678.73, stdev=8802.95 00:39:04.634 lat (usec): min=3521, max=83367, avg=14800.89, stdev=8890.53 00:39:04.634 clat percentiles (usec): 00:39:04.634 | 1.00th=[ 3752], 5.00th=[ 7373], 10.00th=[ 8029], 20.00th=[ 8979], 00:39:04.634 | 30.00th=[ 9765], 40.00th=[11207], 50.00th=[12387], 60.00th=[13829], 00:39:04.634 | 70.00th=[17171], 80.00th=[19268], 90.00th=[22414], 95.00th=[25297], 00:39:04.634 | 99.00th=[65799], 99.50th=[72877], 99.90th=[79168], 99.95th=[83362], 00:39:04.634 | 99.99th=[83362] 00:39:04.634 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:39:04.634 slat (nsec): min=1563, max=12310k, avg=96329.13, stdev=542822.50 00:39:04.635 clat (usec): min=1107, max=83361, avg=14042.47, stdev=11626.48 00:39:04.635 lat (usec): min=1118, max=83373, avg=14138.80, stdev=11695.27 00:39:04.635 clat percentiles (usec): 00:39:04.635 | 1.00th=[ 2868], 5.00th=[ 5997], 10.00th=[ 7504], 20.00th=[ 8094], 00:39:04.635 | 30.00th=[ 8586], 40.00th=[ 9765], 50.00th=[11076], 60.00th=[12125], 00:39:04.635 | 70.00th=[13698], 80.00th=[16188], 90.00th=[20841], 95.00th=[33817], 00:39:04.635 | 99.00th=[71828], 99.50th=[71828], 99.90th=[73925], 99.95th=[83362], 00:39:04.635 | 99.99th=[83362] 00:39:04.635 bw ( KiB/s): min=14832, max=22032, per=20.35%, avg=18432.00, stdev=5091.17, samples=2 00:39:04.635 iops : min= 3708, max= 5508, avg=4608.00, stdev=1272.79, samples=2 00:39:04.635 lat (msec) : 2=0.24%, 4=1.53%, 10=34.49%, 20=47.99%, 50=12.96% 00:39:04.635 lat (msec) : 100=2.78% 00:39:04.635 cpu : usr=3.99%, sys=4.29%, ctx=433, majf=0, minf=2 00:39:04.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:04.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:04.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:04.635 issued rwts: total=4266,4608,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:39:04.635 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:04.635 job3: (groupid=0, jobs=1): err= 0: pid=649463: Tue Oct 8 17:54:56 2024 00:39:04.635 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:39:04.635 slat (nsec): min=979, max=14787k, avg=100289.43, stdev=762124.11 00:39:04.635 clat (usec): min=2772, max=39516, avg=13230.70, stdev=6501.67 00:39:04.635 lat (usec): min=2774, max=39524, avg=13330.99, stdev=6543.27 00:39:04.635 clat percentiles (usec): 00:39:04.635 | 1.00th=[ 4883], 5.00th=[ 6587], 10.00th=[ 7046], 20.00th=[ 7570], 00:39:04.635 | 30.00th=[ 7832], 40.00th=[ 9372], 50.00th=[11338], 60.00th=[13304], 00:39:04.635 | 70.00th=[15008], 80.00th=[19530], 90.00th=[23987], 95.00th=[25560], 00:39:04.635 | 99.00th=[29754], 99.50th=[35390], 99.90th=[35390], 99.95th=[39584], 00:39:04.635 | 99.99th=[39584] 00:39:04.635 write: IOPS=5502, BW=21.5MiB/s (22.5MB/s)(21.6MiB/1005msec); 0 zone resets 00:39:04.635 slat (nsec): min=1609, max=13924k, avg=79913.71, stdev=585597.78 00:39:04.635 clat (usec): min=1210, max=33198, avg=10808.63, stdev=5171.56 00:39:04.635 lat (usec): min=1246, max=33208, avg=10888.54, stdev=5205.36 00:39:04.635 clat percentiles (usec): 00:39:04.635 | 1.00th=[ 2671], 5.00th=[ 4555], 10.00th=[ 5407], 20.00th=[ 6849], 00:39:04.635 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 9241], 60.00th=[10945], 00:39:04.635 | 70.00th=[12518], 80.00th=[15270], 90.00th=[17695], 95.00th=[20579], 00:39:04.635 | 99.00th=[28443], 99.50th=[29492], 99.90th=[31065], 99.95th=[31065], 00:39:04.635 | 99.99th=[33162] 00:39:04.635 bw ( KiB/s): min=20480, max=22736, per=23.85%, avg=21608.00, stdev=1595.23, samples=2 00:39:04.635 iops : min= 5120, max= 5684, avg=5402.00, stdev=398.81, samples=2 00:39:04.635 lat (msec) : 2=0.34%, 4=1.61%, 10=46.77%, 20=39.21%, 50=12.08% 00:39:04.635 cpu : usr=3.59%, sys=5.98%, ctx=379, majf=0, minf=1 00:39:04.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:39:04.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:04.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:04.635 issued rwts: total=5120,5530,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:04.635 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:04.635 00:39:04.635 Run status group 0 (all jobs): 00:39:04.635 READ: bw=83.8MiB/s (87.9MB/s), 16.6MiB/s-29.9MiB/s (17.4MB/s-31.4MB/s), io=87.7MiB (91.9MB), run=1003-1046msec 00:39:04.635 WRITE: bw=88.5MiB/s (92.8MB/s), 17.9MiB/s-30.8MiB/s (18.8MB/s-32.3MB/s), io=92.5MiB (97.0MB), run=1003-1046msec 00:39:04.635 00:39:04.635 Disk stats (read/write): 00:39:04.635 nvme0n1: ios=4658/4831, merge=0/0, ticks=45837/45542, in_queue=91379, util=87.27% 00:39:04.635 nvme0n2: ios=6178/6615, merge=0/0, ticks=31409/32829, in_queue=64238, util=97.66% 00:39:04.635 nvme0n3: ios=3174/3584, merge=0/0, ticks=44474/49077, in_queue=93551, util=99.16% 00:39:04.635 nvme0n4: ios=4128/4607, merge=0/0, ticks=39386/33976, in_queue=73362, util=90.48% 00:39:04.635 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:39:04.635 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=649675 00:39:04.635 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:39:04.635 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:39:04.635 [global] 00:39:04.635 thread=1 00:39:04.635 invalidate=1 00:39:04.635 rw=read 00:39:04.635 time_based=1 00:39:04.635 runtime=10 00:39:04.635 ioengine=libaio 00:39:04.635 direct=1 00:39:04.635 bs=4096 00:39:04.635 iodepth=1 00:39:04.635 norandommap=1 00:39:04.635 numjobs=1 00:39:04.635 00:39:04.635 [job0] 00:39:04.635 filename=/dev/nvme0n1 00:39:04.635 [job1] 00:39:04.635 filename=/dev/nvme0n2 00:39:04.635 [job2] 00:39:04.635 filename=/dev/nvme0n3 00:39:04.635 [job3] 00:39:04.635 filename=/dev/nvme0n4 00:39:04.894 Could not set queue depth (nvme0n1) 00:39:04.894 Could not set queue depth (nvme0n2) 00:39:04.894 Could not set queue depth (nvme0n3) 00:39:04.894 Could not set queue depth (nvme0n4) 00:39:05.153 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:05.153 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:05.153 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:05.153 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:05.153 fio-3.35 00:39:05.153 Starting 4 threads 00:39:07.705 17:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:39:07.965 17:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:39:07.965 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=434176, buflen=4096 00:39:07.965 fio: pid=649993, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:07.965 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=6496256, buflen=4096 00:39:07.965 fio: pid=649980, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:07.965 17:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:07.965 17:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:39:08.227 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11276288, buflen=4096 00:39:08.227 fio: pid=649924, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:08.227 17:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:08.227 17:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:39:08.488 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=11845632, buflen=4096 00:39:08.488 fio: pid=649943, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:08.488 17:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:08.488 17:55:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:39:08.489 00:39:08.489 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=649924: Tue Oct 8 17:55:00 2024 00:39:08.489 read: IOPS=950, BW=3800KiB/s (3891kB/s)(10.8MiB/2898msec) 00:39:08.489 slat (usec): min=6, max=10996, avg=34.71, stdev=272.72 00:39:08.489 clat (usec): min=230, max=1548, avg=1007.34, stdev=99.13 00:39:08.489 lat (usec): min=239, max=12027, avg=1042.05, stdev=291.30 00:39:08.489 clat percentiles (usec): 00:39:08.489 | 1.00th=[ 709], 5.00th=[ 824], 10.00th=[ 889], 20.00th=[ 947], 00:39:08.489 | 30.00th=[ 979], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1037], 00:39:08.489 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1123], 95.00th=[ 1156], 00:39:08.489 | 99.00th=[ 1205], 99.50th=[ 1237], 99.90th=[ 1270], 99.95th=[ 1516], 00:39:08.489 | 99.99th=[ 1549] 00:39:08.489 bw ( KiB/s): min= 3704, max= 3936, per=40.06%, avg=3790.40, stdev=89.34, samples=5 00:39:08.489 iops : min= 926, max= 984, avg=947.60, stdev=22.33, samples=5 00:39:08.489 lat (usec) : 250=0.04%, 500=0.07%, 750=1.60%, 1000=40.05% 00:39:08.489 lat (msec) : 2=58.21% 00:39:08.489 cpu : usr=1.86%, sys=3.73%, ctx=2758, majf=0, minf=1 00:39:08.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:08.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:08.489 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:08.489 issued rwts: total=2754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:08.489 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:08.489 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=649943: Tue Oct 8 17:55:00 2024 00:39:08.489 read: IOPS=932, BW=3729KiB/s (3819kB/s)(11.3MiB/3102msec) 00:39:08.489 slat (usec): min=5, max=13198, avg=45.39, stdev=410.45 00:39:08.489 clat (usec): min=239, max=5688, avg=1011.20, stdev=169.94 00:39:08.489 lat (usec): min=267, max=14366, avg=1056.59, stdev=438.17 00:39:08.489 clat percentiles (usec): 00:39:08.489 | 1.00th=[ 545], 5.00th=[ 668], 10.00th=[ 832], 20.00th=[ 947], 00:39:08.489 | 30.00th=[ 988], 40.00th=[ 1012], 50.00th=[ 1037], 60.00th=[ 1057], 00:39:08.489 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1172], 00:39:08.489 | 99.00th=[ 1254], 99.50th=[ 1287], 99.90th=[ 1811], 99.95th=[ 2769], 00:39:08.489 | 99.99th=[ 5669] 00:39:08.489 bw ( KiB/s): min= 3696, max= 3824, per=39.52%, avg=3739.50, stdev=45.71, samples=6 00:39:08.489 iops : min= 924, max= 956, avg=934.83, stdev=11.43, samples=6 00:39:08.489 lat (usec) : 250=0.03%, 500=0.38%, 750=7.05%, 1000=27.76% 00:39:08.489 lat (msec) : 2=64.67%, 4=0.03%, 10=0.03% 00:39:08.489 cpu : usr=1.58%, sys=3.87%, ctx=2902, majf=0, minf=2 00:39:08.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:08.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:08.489 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:08.489 issued rwts: total=2893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:08.489 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:08.489 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=649980: Tue Oct 8 17:55:00 2024 00:39:08.489 read: IOPS=579, BW=2316KiB/s 
(2372kB/s)(6344KiB/2739msec) 00:39:08.489 slat (usec): min=7, max=247, avg=25.42, stdev= 6.34 00:39:08.489 clat (usec): min=720, max=42013, avg=1681.28, stdev=4723.46 00:39:08.489 lat (usec): min=745, max=42039, avg=1706.69, stdev=4724.69 00:39:08.489 clat percentiles (usec): 00:39:08.489 | 1.00th=[ 824], 5.00th=[ 938], 10.00th=[ 996], 20.00th=[ 1045], 00:39:08.489 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:39:08.489 | 70.00th=[ 1172], 80.00th=[ 1205], 90.00th=[ 1254], 95.00th=[ 1319], 00:39:08.489 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:39:08.489 | 99.99th=[42206] 00:39:08.489 bw ( KiB/s): min= 96, max= 3504, per=26.72%, avg=2528.00, stdev=1482.61, samples=5 00:39:08.489 iops : min= 24, max= 876, avg=632.00, stdev=370.65, samples=5 00:39:08.489 lat (usec) : 750=0.06%, 1000=11.28% 00:39:08.489 lat (msec) : 2=87.21%, 50=1.39% 00:39:08.489 cpu : usr=0.44%, sys=1.90%, ctx=1588, majf=0, minf=2 00:39:08.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:08.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:08.489 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:08.489 issued rwts: total=1587,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:08.489 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:08.489 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=649993: Tue Oct 8 17:55:00 2024 00:39:08.489 read: IOPS=41, BW=165KiB/s (169kB/s)(424KiB/2575msec) 00:39:08.489 slat (nsec): min=2504, max=75833, avg=23780.68, stdev=8979.30 00:39:08.489 clat (usec): min=671, max=42238, avg=24174.21, stdev=20319.59 00:39:08.489 lat (usec): min=674, max=42263, avg=24197.97, stdev=20321.90 00:39:08.489 clat percentiles (usec): 00:39:08.489 | 1.00th=[ 750], 5.00th=[ 824], 10.00th=[ 930], 20.00th=[ 1057], 00:39:08.489 | 30.00th=[ 1139], 40.00th=[ 1270], 50.00th=[41681], 60.00th=[41681], 00:39:08.489 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:08.489 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:08.489 | 99.99th=[42206] 00:39:08.489 bw ( KiB/s): min= 96, max= 440, per=1.75%, avg=166.40, stdev=152.99, samples=5 00:39:08.489 iops : min= 24, max= 110, avg=41.60, stdev=38.25, samples=5 00:39:08.489 lat (usec) : 750=0.93%, 1000=14.02% 00:39:08.489 lat (msec) : 2=27.10%, 4=0.93%, 50=56.07% 00:39:08.489 cpu : usr=0.00%, sys=0.16%, ctx=107, majf=0, minf=2 00:39:08.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:08.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:08.489 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:08.489 issued rwts: total=107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:08.489 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:08.489 00:39:08.489 Run status group 0 (all jobs): 00:39:08.489 READ: bw=9461KiB/s (9688kB/s), 165KiB/s-3800KiB/s (169kB/s-3891kB/s), io=28.7MiB (30.1MB), run=2575-3102msec 00:39:08.489 00:39:08.489 Disk stats (read/write): 00:39:08.489 nvme0n1: ios=2676/0, merge=0/0, ticks=3583/0, in_queue=3583, util=98.60% 00:39:08.489 nvme0n2: ios=2859/0, merge=0/0, ticks=2894/0, in_queue=2894, util=98.60% 00:39:08.489 nvme0n3: ios=1581/0, merge=0/0, ticks=2421/0, in_queue=2421, util=95.51% 00:39:08.489 nvme0n4: ios=105/0, merge=0/0, ticks=2520/0, in_queue=2520, util=96.34% 00:39:08.489 17:55:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:08.489 17:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:39:08.750 17:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:08.750 17:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:39:09.012 17:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:09.012 17:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:39:09.272 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:09.272 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:39:09.272 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:39:09.272 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 649675 00:39:09.272 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:39:09.272 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:09.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:09.532 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:09.532 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:39:09.532 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:39:09.532 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:09.532 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:39:09.532 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:09.532 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:39:09.532 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:39:09.532 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:39:09.532 nvmf hotplug test: fio failed as expected 00:39:09.532 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:39:09.532 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:39:09.532 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:39:09.532 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:39:09.532 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:39:09.532 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:39:09.532 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:09.532 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:39:09.532 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:09.532 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:39:09.532 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:09.532 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:09.533 rmmod nvme_tcp 00:39:09.793 rmmod nvme_fabrics 00:39:09.793 rmmod nvme_keyring 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 646507 ']' 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 646507 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 646507 ']' 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 646507 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 646507 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 646507' 00:39:09.793 killing process with pid 646507 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 646507 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 646507 00:39:09.793 17:55:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:09.793 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:12.340 17:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:12.340 00:39:12.340 real 0m28.135s 00:39:12.340 user 2m12.795s 00:39:12.340 sys 0m12.152s 00:39:12.340 17:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:12.340 17:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:12.340 ************************************ 00:39:12.340 END TEST nvmf_fio_target 00:39:12.340 ************************************ 00:39:12.340 17:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:12.340 17:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:12.340 17:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:12.340 17:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:12.340 ************************************ 00:39:12.340 START TEST nvmf_bdevio 00:39:12.340 ************************************ 00:39:12.340 17:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:12.340 * Looking for test storage... 
00:39:12.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:12.340 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:12.340 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:12.340 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:39:12.340 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:12.340 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:12.340 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:12.340 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:12.340 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:39:12.340 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:39:12.340 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:39:12.340 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:39:12.340 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:39:12.340 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:39:12.340 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:39:12.340 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:12.340 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:12.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.341 --rc genhtml_branch_coverage=1 00:39:12.341 --rc genhtml_function_coverage=1 00:39:12.341 --rc genhtml_legend=1 00:39:12.341 --rc geninfo_all_blocks=1 00:39:12.341 --rc geninfo_unexecuted_blocks=1 00:39:12.341 00:39:12.341 ' 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:12.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.341 --rc genhtml_branch_coverage=1 00:39:12.341 --rc genhtml_function_coverage=1 00:39:12.341 --rc genhtml_legend=1 00:39:12.341 --rc geninfo_all_blocks=1 00:39:12.341 --rc geninfo_unexecuted_blocks=1 00:39:12.341 00:39:12.341 ' 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:12.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.341 --rc genhtml_branch_coverage=1 00:39:12.341 --rc genhtml_function_coverage=1 00:39:12.341 --rc genhtml_legend=1 00:39:12.341 --rc geninfo_all_blocks=1 00:39:12.341 --rc geninfo_unexecuted_blocks=1 00:39:12.341 00:39:12.341 ' 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:12.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.341 --rc genhtml_branch_coverage=1 00:39:12.341 --rc genhtml_function_coverage=1 00:39:12.341 --rc genhtml_legend=1 00:39:12.341 --rc geninfo_all_blocks=1 00:39:12.341 --rc geninfo_unexecuted_blocks=1 00:39:12.341 00:39:12.341 ' 00:39:12.341 17:55:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:12.341 17:55:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:39:12.341 17:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:20.482 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:20.482 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:39:20.482 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:20.482 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:20.482 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:20.482 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:20.482 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:20.482 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:39:20.482 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:39:20.482 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:39:20.482 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:39:20.482 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:39:20.482 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:39:20.482 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:39:20.482 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:39:20.482 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:20.482 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:20.482 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:20.482 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:20.483 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:20.483 17:55:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:20.483 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:20.483 Found net devices under 0000:31:00.0: cvl_0_0 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:20.483 Found net devices under 0000:31:00.1: cvl_0_1 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:20.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:20.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:39:20.483 00:39:20.483 --- 10.0.0.2 ping statistics --- 00:39:20.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:20.483 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:20.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:20.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:39:20.483 00:39:20.483 --- 10.0.0.1 ping statistics --- 00:39:20.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:20.483 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:20.483 17:55:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=655187 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 655187 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 655187 ']' 00:39:20.483 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:20.484 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:20.484 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:20.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:20.484 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:20.484 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:20.484 [2024-10-08 17:55:11.900764] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:20.484 [2024-10-08 17:55:11.901876] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:39:20.484 [2024-10-08 17:55:11.901925] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:20.484 [2024-10-08 17:55:11.991584] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:20.484 [2024-10-08 17:55:12.081889] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:20.484 [2024-10-08 17:55:12.081948] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:20.484 [2024-10-08 17:55:12.081956] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:20.484 [2024-10-08 17:55:12.081963] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:20.484 [2024-10-08 17:55:12.081970] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:20.484 [2024-10-08 17:55:12.084368] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:39:20.484 [2024-10-08 17:55:12.084527] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:39:20.484 [2024-10-08 17:55:12.084719] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:39:20.484 [2024-10-08 17:55:12.084720] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:39:20.484 [2024-10-08 17:55:12.183208] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
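[editor's note] The trace above builds the test topology before anything NVMe-oF happens: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target, the sibling port (cvl_0_1) stays in the root namespace as the initiator, both get 10.0.0.0/24 addresses, TCP/4420 is opened with a tagged iptables rule, and nvmf_tgt is then launched inside the namespace with --interrupt-mode. A condensed standalone sketch of that sequence; device names, addresses and the 0x78 core mask (cores 3-6) are taken from this run:

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                 # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # sanity check both ways
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!   # waitforlisten (autotest_common.sh) then blocks on /var/tmp/spdk.sock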
00:39:20.484 [2024-10-08 17:55:12.184112] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:20.484 [2024-10-08 17:55:12.184359] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:20.484 [2024-10-08 17:55:12.184860] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:20.484 [2024-10-08 17:55:12.184986] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:20.745 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:20.745 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:39:20.745 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:20.745 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:20.745 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:21.006 [2024-10-08 17:55:12.781678] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:21.006 Malloc0 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.006 17:55:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:21.006 [2024-10-08 17:55:12.862018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:39:21.006 { 00:39:21.006 "params": { 00:39:21.006 "name": "Nvme$subsystem", 00:39:21.006 "trtype": "$TEST_TRANSPORT", 00:39:21.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:21.006 "adrfam": "ipv4", 00:39:21.006 "trsvcid": "$NVMF_PORT", 00:39:21.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:21.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:21.006 "hdgst": ${hdgst:-false}, 00:39:21.006 "ddgst": ${ddgst:-false} 00:39:21.006 }, 00:39:21.006 "method": "bdev_nvme_attach_controller" 00:39:21.006 } 00:39:21.006 EOF 00:39:21.006 )") 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:39:21.006 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:39:21.006 "params": { 00:39:21.006 "name": "Nvme1", 00:39:21.006 "trtype": "tcp", 00:39:21.006 "traddr": "10.0.0.2", 00:39:21.006 "adrfam": "ipv4", 00:39:21.006 "trsvcid": "4420", 00:39:21.006 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:21.006 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:21.006 "hdgst": false, 00:39:21.006 "ddgst": false 00:39:21.006 }, 00:39:21.006 "method": "bdev_nvme_attach_controller" 00:39:21.006 }' 00:39:21.006 [2024-10-08 17:55:12.921213] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
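[editor's note] Before bdevio can attach, the trace above provisions the target over RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and a subsystem exposing it on 10.0.0.2:4420. A sketch of the same sequence as plain rpc.py calls; rpc_cmd in the trace is a wrapper around this client, so the arguments carry over verbatim:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                                  # -a: allow any host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420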
00:39:21.006 [2024-10-08 17:55:12.921285] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid655307 ] 00:39:21.267 [2024-10-08 17:55:13.004876] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:21.267 [2024-10-08 17:55:13.103537] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:39:21.267 [2024-10-08 17:55:13.103701] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:39:21.267 [2024-10-08 17:55:13.103701] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:39:21.528 I/O targets: 00:39:21.528 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:39:21.528 00:39:21.528 00:39:21.528 CUnit - A unit testing framework for C - Version 2.1-3 00:39:21.528 http://cunit.sourceforge.net/ 00:39:21.528 00:39:21.528 00:39:21.528 Suite: bdevio tests on: Nvme1n1 00:39:21.528 Test: blockdev write read block ...passed 00:39:21.528 Test: blockdev write zeroes read block ...passed 00:39:21.789 Test: blockdev write zeroes read no split ...passed 00:39:21.789 Test: blockdev write zeroes read split ...passed 00:39:21.789 Test: blockdev write zeroes read split partial ...passed 00:39:21.789 Test: blockdev reset ...[2024-10-08 17:55:13.555901] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.789 [2024-10-08 17:55:13.556015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b5000 (9): Bad file descriptor 00:39:21.789 [2024-10-08 17:55:13.691938] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
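[editor's note] bdevio never reads a config file here: gen_nvmf_target_json prints the bdev_nvme_attach_controller stanza shown above, and process substitution hands it to the binary as /dev/fd/62. A sketch of the same pattern; the inner params are copied from the trace, while the outer subsystems/bdev framing is the standard SPDK JSON-config shape and is an assumption, since the trace only prints the inner stanza:

    json='{"subsystems":[{"subsystem":"bdev","config":[{
      "method":"bdev_nvme_attach_controller",
      "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2",
                "adrfam":"ipv4","trsvcid":"4420",
                "subnqn":"nqn.2016-06.io.spdk:cnode1",
                "hostnqn":"nqn.2016-06.io.spdk:host1",
                "hdgst":false,"ddgst":false}}]}]}'
    # <(...) shows up inside the process as /dev/fd/NN -- hence --json /dev/fd/62 above
    ./test/bdev/bdevio/bdevio --json <(echo "$json")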
00:39:21.789 passed 00:39:21.789 Test: blockdev write read 8 blocks ...passed 00:39:21.789 Test: blockdev write read size > 128k ...passed 00:39:21.789 Test: blockdev write read invalid size ...passed 00:39:21.789 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:21.789 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:21.789 Test: blockdev write read max offset ...passed 00:39:22.051 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:22.051 Test: blockdev writev readv 8 blocks ...passed 00:39:22.051 Test: blockdev writev readv 30 x 1block ...passed 00:39:22.051 Test: blockdev writev readv block ...passed 00:39:22.051 Test: blockdev writev readv size > 128k ...passed 00:39:22.051 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:22.051 Test: blockdev comparev and writev ...[2024-10-08 17:55:13.918201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:22.051 [2024-10-08 17:55:13.918260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:22.051 [2024-10-08 17:55:13.918277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:22.051 [2024-10-08 17:55:13.918286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:22.051 [2024-10-08 17:55:13.918811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:22.051 [2024-10-08 17:55:13.918824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:39:22.051 [2024-10-08 17:55:13.918838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:22.051 [2024-10-08 17:55:13.918846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:39:22.051 [2024-10-08 17:55:13.919350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:22.051 [2024-10-08 17:55:13.919361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:39:22.051 [2024-10-08 17:55:13.919375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:22.051 [2024-10-08 17:55:13.919383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:39:22.051 [2024-10-08 17:55:13.919783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:22.051 [2024-10-08 17:55:13.919794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:39:22.051 [2024-10-08 17:55:13.919808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:22.051 [2024-10-08 17:55:13.919815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:39:22.051 passed 00:39:22.051 Test: blockdev nvme passthru rw ...passed 00:39:22.051 Test: blockdev nvme passthru vendor specific ...[2024-10-08 17:55:14.003929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:22.051 [2024-10-08 17:55:14.003951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:39:22.051 [2024-10-08 17:55:14.004342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:22.051 [2024-10-08 17:55:14.004353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:39:22.051 [2024-10-08 17:55:14.004733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:22.051 [2024-10-08 17:55:14.004743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:39:22.051 [2024-10-08 17:55:14.005130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:22.051 [2024-10-08 17:55:14.005141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:39:22.051 passed 00:39:22.051 Test: blockdev nvme admin passthru ...passed 00:39:22.312 Test: blockdev copy ...passed 00:39:22.312 00:39:22.312 Run Summary: Type Total Ran Passed Failed Inactive 00:39:22.312 suites 1 1 n/a 0 0 00:39:22.312 tests 23 23 23 0 0 00:39:22.312 asserts 152 152 152 0 n/a 00:39:22.312 00:39:22.312 Elapsed time = 1.266 seconds 00:39:22.312 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:22.312 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:22.312 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:22.312 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:22.312 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:39:22.312 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:39:22.312 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:22.312 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:39:22.312 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:22.312 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:39:22.312 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:22.312 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:22.312 rmmod nvme_tcp 00:39:22.312 rmmod nvme_fabrics 00:39:22.312 rmmod nvme_keyring 00:39:22.312 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
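[editor's note] Teardown above runs in the reverse order of setup. A condensed sketch of what nvmftestfini does, with the pid and device names from this run; the netns removal line is an assumption about what the remove_spdk_ns helper expands to:

    modprobe -v -r nvme-tcp                 # also unloads nvme_fabrics / nvme_keyring
    kill "$nvmfpid" && wait "$nvmfpid"      # killprocess: stop the target and reap it
    # iptr: drop only the SPDK_NVMF-tagged rules, leave the rest of the ruleset intact
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk         # assumed body of remove_spdk_ns
    ip -4 addr flush cvl_0_1                # return the initiator port to a clean state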
00:39:22.312 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:39:22.312 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:39:22.312 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 655187 ']' 00:39:22.312 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 655187 00:39:22.312 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 655187 ']' 00:39:22.313 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 655187 00:39:22.313 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:39:22.574 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:22.574 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 655187 00:39:22.574 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:39:22.574 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:39:22.574 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 655187' 00:39:22.574 killing process with pid 655187 00:39:22.574 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 655187 00:39:22.574 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 655187 00:39:22.835 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:22.835 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:22.835 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:22.835 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:39:22.835 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:39:22.835 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:22.835 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:39:22.835 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:22.835 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:22.835 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:22.835 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:22.835 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:24.750 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:24.750 00:39:24.750 real 0m12.745s 00:39:24.750 user 0m11.158s 
00:39:24.750 sys 0m6.699s 00:39:24.750 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:24.750 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:24.751 ************************************ 00:39:24.751 END TEST nvmf_bdevio 00:39:24.751 ************************************ 00:39:24.751 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:39:24.751 00:39:24.751 real 5m4.086s 00:39:24.751 user 10m14.595s 00:39:24.751 sys 2m4.132s 00:39:24.751 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:24.751 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:24.751 ************************************ 00:39:24.751 END TEST nvmf_target_core_interrupt_mode 00:39:24.751 ************************************ 00:39:25.012 17:55:16 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:25.012 17:55:16 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:25.012 17:55:16 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:25.012 17:55:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:25.012 ************************************ 00:39:25.012 START TEST nvmf_interrupt 00:39:25.012 ************************************ 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:25.012 * Looking for test storage... 
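[editor's note] The coverage probe that follows (lcov --version fed to "lt 1.15 2") leans on the dotted-version helpers in scripts/common.sh: lt expands to cmp_versions, which splits each version on '.', '-' and ':' and compares numeric fields left to right. A standalone sketch of that comparison, renamed cmp_lt and limited to numeric fields:

    cmp_lt() {                  # usage: cmp_lt 1.15 2  -> exit 0 iff $1 < $2
        local -a v1 v2; local IFS=.-: i max
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # first larger field wins
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1                # equal is not less-than
    }
    cmp_lt 1.15 2 && echo "lcov predates 2.x"   # decided on field 0: 1 < 2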
00:39:25.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:25.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:25.012 --rc genhtml_branch_coverage=1 00:39:25.012 --rc genhtml_function_coverage=1 00:39:25.012 --rc genhtml_legend=1 00:39:25.012 --rc geninfo_all_blocks=1 00:39:25.012 --rc geninfo_unexecuted_blocks=1 00:39:25.012 00:39:25.012 ' 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:25.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:25.012 --rc genhtml_branch_coverage=1 00:39:25.012 --rc genhtml_function_coverage=1 00:39:25.012 --rc genhtml_legend=1 00:39:25.012 --rc geninfo_all_blocks=1 00:39:25.012 --rc geninfo_unexecuted_blocks=1 00:39:25.012 00:39:25.012 ' 00:39:25.012 17:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:25.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:25.012 --rc genhtml_branch_coverage=1 00:39:25.012 --rc genhtml_function_coverage=1 00:39:25.012 --rc genhtml_legend=1 00:39:25.012 --rc geninfo_all_blocks=1 00:39:25.012 --rc geninfo_unexecuted_blocks=1 00:39:25.012 00:39:25.012 ' 00:39:25.012 17:55:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:25.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:25.012 --rc genhtml_branch_coverage=1 00:39:25.012 --rc genhtml_function_coverage=1 00:39:25.012 --rc genhtml_legend=1 00:39:25.012 --rc geninfo_all_blocks=1 00:39:25.012 --rc geninfo_unexecuted_blocks=1 00:39:25.012 00:39:25.012 ' 00:39:25.012 17:55:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:25.012 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:39:25.274 17:55:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:33.413 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:33.413 17:55:24 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:33.413 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:33.413 Found net devices under 0000:31:00.0: cvl_0_0 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:33.413 Found net devices under 0000:31:00.1: cvl_0_1 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:33.413 17:55:24 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:33.413 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:33.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:33.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:39:33.414 00:39:33.414 --- 10.0.0.2 ping statistics --- 00:39:33.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:33.414 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:33.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:33.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:39:33.414 00:39:33.414 --- 10.0.0.1 ping statistics --- 00:39:33.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:33.414 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=659943 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 659943 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 659943 ']' 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:33.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:33.414 17:55:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:33.414 [2024-10-08 17:55:24.698167] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:33.414 [2024-10-08 17:55:24.699283] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:39:33.414 [2024-10-08 17:55:24.699329] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:33.414 [2024-10-08 17:55:24.788446] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:33.414 [2024-10-08 17:55:24.886261] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
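[editor's note] With the interrupt-mode target up on cores 0-1 (mask 0x3), the trace that follows provisions its storage from a plain file: dd writes a 10 MB file, bdev_aio_create wraps it as AIO0 with a 2048-byte block size, and the usual transport/subsystem/namespace/listener RPCs export it on 10.0.0.2:4420. A sketch of the same sequence as rpc.py calls, arguments as traced:

    dd if=/dev/zero of=./aiofile bs=2048 count=5000               # 10 MB backing file
    ./scripts/rpc.py bdev_aio_create ./aiofile AIO0 2048          # 2048 B logical blocks
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420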
00:39:33.414 [2024-10-08 17:55:24.886324] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:33.414 [2024-10-08 17:55:24.886337] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:33.414 [2024-10-08 17:55:24.886347] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:33.414 [2024-10-08 17:55:24.886356] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:33.414 [2024-10-08 17:55:24.887715] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:39:33.414 [2024-10-08 17:55:24.887714] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:39:33.414 [2024-10-08 17:55:24.963957] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:33.414 [2024-10-08 17:55:24.964673] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:33.414 [2024-10-08 17:55:24.964949] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:33.674 17:55:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:33.674 17:55:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:39:33.674 17:55:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:33.674 17:55:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:33.674 17:55:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:33.674 17:55:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:33.674 17:55:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:39:33.674 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:39:33.674 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:39:33.674 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:39:33.674 5000+0 records in 00:39:33.674 5000+0 records out 00:39:33.674 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0191931 s, 534 MB/s 00:39:33.674 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:39:33.674 17:55:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.674 17:55:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:33.675 AIO0 00:39:33.675 17:55:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.675 17:55:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:39:33.675 17:55:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.675 17:55:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:33.675 [2024-10-08 17:55:25.640682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:33.675 17:55:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.675 17:55:25 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:33.675 17:55:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.675 17:55:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:33.935 [2024-10-08 17:55:25.693111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 659943 0 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 659943 0 idle 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=659943 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 659943 -w 256 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 659943 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.34 reactor_0' 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 659943 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.34 reactor_0 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 659943 1 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 659943 1 idle 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=659943 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 659943 -w 256 00:39:33.935 17:55:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:34.196 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 659990 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:39:34.196 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 659990 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:39:34.196 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:34.196 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:34.196 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:34.196 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:34.196 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:34.196 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:34.196 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:34.196 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:34.196 17:55:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:39:34.196 17:55:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=660153 00:39:34.196 17:55:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:39:34.196 17:55:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
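The perf invocation above drives the just-created subsystem over TCP: queue depth 256 (-q), 4096-byte IOs (-o), a random read/write mix for 10 seconds (-w randrw -t 10, with -M 30 meaning a 30% read mix if -M carries its usual rwmixread meaning), pinned to core mask 0xC. 0xC is binary 1100, so the initiator runs on lcores 2 and 3, which is why the result rows further below report "from core 2" and "from core 3". A condensed sketch of the same launch-and-wait pattern (the perf path is shortened here for readability; the test uses the full workspace path):

  # Sketch: run spdk_nvme_perf in the background and record its pid,
  # mirroring the flags used by the test above.
  perf=./build/bin/spdk_nvme_perf   # shortened path, illustrative
  "$perf" -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
  perf_pid=$!
  # ... busy checks run here while IO is in flight ...
  wait "$perf_pid"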
00:39:34.196 17:55:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:34.196 17:55:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 659943 0 00:39:34.197 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 659943 0 busy 00:39:34.197 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=659943 00:39:34.197 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:34.197 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:34.197 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:34.197 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:34.197 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:34.197 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:34.197 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:34.197 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:34.197 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 659943 -w 256 00:39:34.197 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 659943 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.53 reactor_0' 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 659943 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.53 reactor_0 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 659943 1 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 659943 1 busy 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=659943 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 659943 -w 256 00:39:34.458 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:34.720 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 659990 root 20 0 128.2g 44928 32256 R 87.5 0.0 0:00.28 reactor_1' 00:39:34.720 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 659990 root 20 0 128.2g 44928 32256 R 87.5 0.0 0:00.28 reactor_1 00:39:34.720 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:34.720 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:34.720 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=87.5 00:39:34.720 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=87 00:39:34.720 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:34.720 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:34.720 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:34.720 17:55:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:34.720 17:55:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 660153 00:39:44.718 Initializing NVMe Controllers 00:39:44.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:44.718 Controller IO queue size 256, less than required. 00:39:44.718 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:44.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:44.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:44.718 Initialization complete. Launching workers. 
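Between "Launching workers." and the summary that follows, the 10-second workload runs (note the timestamp jump from 17:55:26 to 17:55:36). During that window the checks above lowered BUSY_THRESHOLD to 30 and re-sampled both reactors: reactor_0 read 99.9% and reactor_1 87.5%, each truncated to its integer part before the comparison, so both cleared the threshold. A rough standalone sketch of that sampling loop (the one-second back-off is an assumption; interrupt/common.sh's actual retry delay may differ):

  # Sketch: poll one reactor thread's %CPU (column 9 of `top -bH` here)
  # until it crosses the busy threshold, with up to 10 attempts as in
  # the j loop traced above.
  pid=659943 idx=0 busy_threshold=30
  for (( j = 10; j != 0; j-- )); do
      cpu_rate=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" \
          | sed -e 's/^\s*//g' | awk '{print $9}')
      cpu_rate=${cpu_rate%.*}          # 99.9 -> 99: drop the fractional part
      (( cpu_rate >= busy_threshold )) && break
      sleep 1                          # assumed back-off, not from the test
  done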
00:39:44.718 ========================================================
00:39:44.718 Latency(us)
00:39:44.718 Device Information : IOPS MiB/s Average min max
00:39:44.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19053.10 74.43 13440.84 4317.90 33972.30
00:39:44.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19997.20 78.11 12803.83 7398.53 30447.11
00:39:44.718 ========================================================
00:39:44.718 Total : 39050.30 152.54 13114.64 4317.90 33972.30
00:39:44.718
00:39:44.718 [2024-10-08 17:55:36.209780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21504c0 is same with the state(6) to be set
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 659943 0
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 659943 0 idle
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=659943
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 659943 -w 256
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 659943 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.34 reactor_0'
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 659943 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.34 reactor_0
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 659943 1
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 659943 1 idle
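The Total row in the summary table above is internally consistent with the per-core rows: IOPS and MiB/s sum (19053.10 + 19997.20 = 39050.30), the min/max columns take the extremes across both cores, and the average latency is, to within rounding, the IOPS-weighted mean of the two per-core averages. A one-liner to check that last figure:

  # Sketch: recompute the Total average latency from the two core rows.
  awk 'BEGIN { printf "%.2f us\n",
      (19053.10 * 13440.84 + 19997.20 * 12803.83) / (19053.10 + 19997.20) }'
  # prints 13114.63, matching the table's 13114.64 up to rounding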
00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=659943 00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 659943 -w 256 00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 659990 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 659990 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:44.718 17:55:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:45.660 17:55:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:39:45.660 17:55:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:39:45.660 17:55:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:39:45.660 17:55:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:39:45.660 17:55:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 
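waitforserial, traced above, is the initiator-side settle loop: after nvme connect with the host NQN and host ID, it polls lsblk up to 15 times at two-second intervals, counting block devices whose SERIAL column matches the subsystem serial, and succeeds once the count reaches the expected device count (1 here). Condensed into a standalone sketch:

  # Sketch: wait for the connected namespace to surface as a block device.
  serial=SPDKISFASTANDAWESOME expected=1 i=0
  while (( i++ <= 15 )); do
      nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
      (( nvme_devices == expected )) && break
      sleep 2
  done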
00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 659943 0 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 659943 0 idle 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=659943 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 659943 -w 256 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 659943 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.72 reactor_0' 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 659943 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.72 reactor_0 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 659943 1 00:39:47.570 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 659943 1 idle 00:39:47.571 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=659943 00:39:47.571 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:47.571 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:47.571 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:47.571 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:47.571 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:47.571 17:55:39 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:47.571 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:47.571 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:47.571 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:47.571 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 659943 -w 256 00:39:47.571 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:47.832 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 659990 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1' 00:39:47.832 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 659990 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1 00:39:47.832 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:47.832 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:47.832 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:47.832 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:47.832 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:47.832 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:47.832 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:47.832 17:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:47.832 17:55:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:48.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:48.092 17:55:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:48.092 17:55:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:39:48.092 17:55:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:48.092 17:55:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:39:48.092 17:55:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:39:48.092 17:55:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:48.092 17:55:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:39:48.092 17:55:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:39:48.092 17:55:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:39:48.092 17:55:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:48.092 17:55:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:39:48.092 17:55:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:48.092 17:55:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:39:48.092 17:55:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:48.092 17:55:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:48.092 rmmod nvme_tcp 00:39:48.092 rmmod nvme_fabrics 00:39:48.092 rmmod nvme_keyring 00:39:48.092 17:55:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:48.092 17:55:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:39:48.092 17:55:40 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:39:48.092 17:55:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 659943 ']' 00:39:48.092 17:55:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 659943 00:39:48.092 17:55:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 659943 ']' 00:39:48.092 17:55:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 659943 00:39:48.352 17:55:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:39:48.352 17:55:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:48.352 17:55:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 659943 00:39:48.352 17:55:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:48.352 17:55:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:48.353 17:55:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 659943' 00:39:48.353 killing process with pid 659943 00:39:48.353 17:55:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 659943 00:39:48.353 17:55:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 659943 00:39:48.353 17:55:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:48.353 17:55:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:48.353 17:55:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:48.353 17:55:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:39:48.353 17:55:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:39:48.353 17:55:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:48.353 17:55:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:39:48.353 17:55:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:48.353 17:55:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:48.353 17:55:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:48.353 17:55:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:48.353 17:55:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:50.900 17:55:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:50.900 00:39:50.900 real 0m25.607s 00:39:50.900 user 0m40.397s 00:39:50.900 sys 0m9.747s 00:39:50.901 17:55:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:50.901 17:55:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:50.901 ************************************ 00:39:50.901 END TEST nvmf_interrupt 00:39:50.901 ************************************ 00:39:50.901 00:39:50.901 real 30m13.658s 00:39:50.901 user 61m39.433s 00:39:50.901 sys 10m10.343s 00:39:50.901 17:55:42 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:50.901 17:55:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:50.901 ************************************ 00:39:50.901 END TEST nvmf_tcp 00:39:50.901 ************************************ 00:39:50.901 17:55:42 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:39:50.901 17:55:42 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:50.901 17:55:42 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:50.901 17:55:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:50.901 17:55:42 -- common/autotest_common.sh@10 -- # set +x 00:39:50.901 ************************************ 00:39:50.901 START TEST spdkcli_nvmf_tcp 00:39:50.901 ************************************ 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:50.901 * Looking for test storage... 00:39:50.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:50.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.901 --rc genhtml_branch_coverage=1 00:39:50.901 --rc genhtml_function_coverage=1 00:39:50.901 --rc genhtml_legend=1 00:39:50.901 --rc geninfo_all_blocks=1 00:39:50.901 --rc geninfo_unexecuted_blocks=1 00:39:50.901 00:39:50.901 ' 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:50.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.901 --rc genhtml_branch_coverage=1 00:39:50.901 --rc genhtml_function_coverage=1 00:39:50.901 --rc genhtml_legend=1 00:39:50.901 --rc geninfo_all_blocks=1 00:39:50.901 --rc geninfo_unexecuted_blocks=1 00:39:50.901 00:39:50.901 ' 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:50.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.901 --rc genhtml_branch_coverage=1 00:39:50.901 --rc genhtml_function_coverage=1 00:39:50.901 --rc genhtml_legend=1 00:39:50.901 --rc geninfo_all_blocks=1 00:39:50.901 --rc geninfo_unexecuted_blocks=1 00:39:50.901 00:39:50.901 ' 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:50.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.901 --rc genhtml_branch_coverage=1 00:39:50.901 --rc genhtml_function_coverage=1 00:39:50.901 --rc genhtml_legend=1 00:39:50.901 --rc geninfo_all_blocks=1 00:39:50.901 --rc geninfo_unexecuted_blocks=1 00:39:50.901 00:39:50.901 ' 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:39:50.901 
17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:39:50.901 17:55:42 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:50.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:50.901 17:55:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:39:50.902 17:55:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=663521 00:39:50.902 17:55:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 663521 00:39:50.902 17:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 663521 ']' 00:39:50.902 17:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:50.902 17:55:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:39:50.902 17:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:50.902 17:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:50.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:50.902 17:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:50.902 17:55:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:50.902 [2024-10-08 17:55:42.830394] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
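run_nvmf_tgt launches the target with core mask 0x3 and main core 0, records the pid (663521), and waitforlisten then blocks until the RPC server answers on /var/tmp/spdk.sock before any spdkcli command runs. A minimal version of that start-and-poll pattern (the readiness probe via spdk_get_version and the retry budget are assumptions; the real helper's check may differ):

  # Sketch: start nvmf_tgt and wait for its RPC socket to accept commands.
  ./build/bin/nvmf_tgt -m 0x3 -p 0 &      # path shortened for readability
  nvmf_tgt_pid=$!
  for i in {1..100}; do                   # retry budget is illustrative
      rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
      sleep 0.1
  done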
00:39:50.902 [2024-10-08 17:55:42.830449] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid663521 ] 00:39:51.162 [2024-10-08 17:55:42.902520] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:51.162 [2024-10-08 17:55:42.994398] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:39:51.162 [2024-10-08 17:55:42.994402] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:39:51.734 17:55:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:51.734 17:55:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:39:51.735 17:55:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:39:51.735 17:55:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:51.735 17:55:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:51.735 17:55:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:39:51.735 17:55:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:39:51.735 17:55:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:39:51.735 17:55:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:51.735 17:55:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:51.735 17:55:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:39:51.735 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:39:51.735 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:39:51.735 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:39:51.735 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:39:51.735 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:39:51.735 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:39:51.735 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:51.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:39:51.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:39:51.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:51.735 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:51.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:39:51.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:51.735 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:51.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:39:51.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:39:51.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:51.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:51.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:51.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:39:51.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:39:51.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:51.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:39:51.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:51.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:39:51.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:39:51.735 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:39:51.735 ' 00:39:55.036 [2024-10-08 17:55:46.510854] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:55.978 [2024-10-08 17:55:47.867043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:39:58.520 [2024-10-08 17:55:50.382136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:40:01.067 [2024-10-08 17:55:52.604439] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:02.452 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:02.452 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:02.452 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:02.452 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:02.452 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:02.452 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:02.452 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:02.452 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:02.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:02.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:02.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:02.452 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:02.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:02.452 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:02.452 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:02.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:02.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:02.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:02.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:02.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:02.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:02.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:02.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:02.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:02.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:02.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:02.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:02.452 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:02.452 17:55:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:02.452 17:55:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:02.452 17:55:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:02.452 17:55:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:02.452 17:55:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:02.452 17:55:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:02.452 17:55:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:02.452 17:55:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:03.023 17:55:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:03.023 17:55:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:03.023 17:55:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:03.023 17:55:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:03.023 17:55:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:03.023 
17:55:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:03.023 17:55:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:03.023 17:55:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:03.023 17:55:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:03.023 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:03.023 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:03.023 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:03.023 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:03.023 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:03.023 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:03.023 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:03.023 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:03.023 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:03.023 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:03.023 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:03.023 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:03.023 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:03.023 ' 00:40:09.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:09.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:09.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:09.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:09.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:09.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:09.608 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:09.608 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:09.608 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:09.608 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:09.608 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:40:09.608 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:09.608 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:09.608 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:09.608 
17:56:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 663521 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 663521 ']' 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 663521 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 663521 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 663521' 00:40:09.608 killing process with pid 663521 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 663521 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 663521 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 663521 ']' 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 663521 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 663521 ']' 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 663521 00:40:09.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (663521) - No such process 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 663521 is not found' 00:40:09.608 Process with pid 663521 is not found 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:09.608 00:40:09.608 real 0m18.267s 00:40:09.608 user 0m40.389s 00:40:09.608 sys 0m1.040s 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:09.608 17:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:09.608 ************************************ 00:40:09.608 END TEST spdkcli_nvmf_tcp 00:40:09.608 ************************************ 00:40:09.608 17:56:00 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:09.608 17:56:00 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:09.608 17:56:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:09.608 17:56:00 -- common/autotest_common.sh@10 -- # set +x 00:40:09.608 ************************************ 00:40:09.608 START TEST nvmf_identify_passthru 00:40:09.608 ************************************ 00:40:09.608 17:56:00 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:09.608 * Looking for test storage... 
00:40:09.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:09.608 17:56:00 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:09.608 17:56:00 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:40:09.608 17:56:00 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:09.608 17:56:01 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:09.608 17:56:01 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:09.608 17:56:01 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:09.608 17:56:01 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:09.608 17:56:01 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:40:09.608 17:56:01 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:40:09.608 17:56:01 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:40:09.608 17:56:01 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:40:09.608 17:56:01 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:40:09.608 17:56:01 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:40:09.608 17:56:01 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:40:09.608 17:56:01 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:09.608 17:56:01 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:40:09.608 17:56:01 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:40:09.608 17:56:01 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:09.608 17:56:01 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:09.608 17:56:01 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:40:09.609 17:56:01 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:40:09.609 17:56:01 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:09.609 17:56:01 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:40:09.609 17:56:01 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:40:09.609 17:56:01 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:40:09.609 17:56:01 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:40:09.609 17:56:01 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:09.609 17:56:01 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:40:09.609 17:56:01 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:40:09.609 17:56:01 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:09.609 17:56:01 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:09.609 17:56:01 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:40:09.609 17:56:01 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:09.609 17:56:01 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:09.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.609 --rc genhtml_branch_coverage=1 00:40:09.609 --rc genhtml_function_coverage=1 00:40:09.609 --rc genhtml_legend=1 00:40:09.609 --rc geninfo_all_blocks=1 00:40:09.609 --rc geninfo_unexecuted_blocks=1 00:40:09.609 00:40:09.609 ' 00:40:09.609 17:56:01 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:09.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.609 --rc genhtml_branch_coverage=1 00:40:09.609 --rc genhtml_function_coverage=1 00:40:09.609 --rc genhtml_legend=1 00:40:09.609 --rc geninfo_all_blocks=1 00:40:09.609 --rc geninfo_unexecuted_blocks=1 00:40:09.609 00:40:09.609 ' 00:40:09.609 17:56:01 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:09.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.609 --rc genhtml_branch_coverage=1 00:40:09.609 --rc genhtml_function_coverage=1 00:40:09.609 --rc genhtml_legend=1 00:40:09.609 --rc geninfo_all_blocks=1 00:40:09.609 --rc geninfo_unexecuted_blocks=1 00:40:09.609 00:40:09.609 ' 00:40:09.609 17:56:01 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:09.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.609 --rc genhtml_branch_coverage=1 00:40:09.609 --rc genhtml_function_coverage=1 00:40:09.609 --rc genhtml_legend=1 00:40:09.609 --rc geninfo_all_blocks=1 00:40:09.609 --rc geninfo_unexecuted_blocks=1 00:40:09.609 00:40:09.609 ' 00:40:09.609 17:56:01 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:09.609 17:56:01 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:09.609 17:56:01 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:09.609 17:56:01 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:09.609 17:56:01 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:09.609 17:56:01 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.609 17:56:01 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.609 17:56:01 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.609 17:56:01 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:09.609 17:56:01 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:09.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:09.609 17:56:01 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:09.609 17:56:01 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:09.609 17:56:01 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:09.609 17:56:01 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:09.609 17:56:01 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:09.609 17:56:01 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.609 17:56:01 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.609 17:56:01 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.609 17:56:01 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:09.609 17:56:01 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.609 17:56:01 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:09.609 17:56:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:09.609 17:56:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:09.609 17:56:01 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:40:09.609 17:56:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:17.757 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:17.757 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:40:17.757 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:17.757 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:17.757 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:17.757 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:17.757 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:17.757 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:40:17.757 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:17.757 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:40:17.757 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:40:17.757 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:40:17.757 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:40:17.757 17:56:08 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:40:17.757 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:40:17.757 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:17.757 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:17.757 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:17.757 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:17.757 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:17.757 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:17.757 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:17.758 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:17.758 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:17.758 Found net devices under 0000:31:00.0: cvl_0_0 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:17.758 Found net devices under 0000:31:00.1: cvl_0_1 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:17.758 17:56:08 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:17.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:17.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:40:17.758 00:40:17.758 --- 10.0.0.2 ping statistics --- 00:40:17.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:17.758 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:17.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
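The interface setup above, condensed into one sequence (names and addresses are the ones from this run): cvl_0_0 becomes the target inside the cvl_0_0_ns_spdk namespace at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and a single comment-tagged iptables rule opens port 4420 so teardown can find and remove exactly that rule later:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # root namespace -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> initiator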
00:40:17.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:40:17.758 00:40:17.758 --- 10.0.0.1 ping statistics --- 00:40:17.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:17.758 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:17.758 17:56:08 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:17.758 17:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:17.758 17:56:08 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:17.758 17:56:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:17.758 17:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:17.758 17:56:08 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:40:17.758 17:56:08 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:40:17.758 17:56:08 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:40:17.758 17:56:08 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:40:17.758 17:56:08 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:40:17.758 17:56:08 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:40:17.758 17:56:08 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:17.758 17:56:08 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:17.758 17:56:08 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:40:17.758 17:56:08 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:40:17.758 17:56:08 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:40:17.758 17:56:08 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:40:17.758 17:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:40:17.758 17:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:40:17.758 17:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:40:17.758 17:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:17.758 17:56:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:17.758 17:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605494 00:40:17.758 17:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:40:17.758 17:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:17.758 17:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:18.020 17:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:40:18.020 17:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:18.020 17:56:09 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:18.020 17:56:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:18.020 17:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:18.020 17:56:09 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:18.020 17:56:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:18.020 17:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=670949 00:40:18.020 17:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:18.020 17:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:18.020 17:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 670949 00:40:18.020 17:56:09 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 670949 ']' 00:40:18.020 17:56:09 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:18.020 17:56:09 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:18.020 17:56:09 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:18.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:18.020 17:56:09 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:18.020 17:56:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:18.020 [2024-10-08 17:56:09.943963] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:40:18.020 [2024-10-08 17:56:09.944044] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:18.282 [2024-10-08 17:56:10.032150] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:18.282 [2024-10-08 17:56:10.129362] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:18.282 [2024-10-08 17:56:10.129431] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
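The target comes up inside the namespace with RPC gated (`--wait-for-rpc`), and waitforlisten blocks until the UNIX socket answers before nvmf_set_config is sent. A minimal sketch of that launch-and-wait pattern; the polling loop is an assumed stand-in for the real waitforlisten helper:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died during init
    sleep 0.5
done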
00:40:18.282 [2024-10-08 17:56:10.129443] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:18.282 [2024-10-08 17:56:10.129452] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:18.282 [2024-10-08 17:56:10.129467] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:18.282 [2024-10-08 17:56:10.131701] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:18.282 [2024-10-08 17:56:10.131864] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:40:18.282 [2024-10-08 17:56:10.132042] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:40:18.282 [2024-10-08 17:56:10.132105] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:18.855 17:56:10 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:18.855 17:56:10 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:40:18.855 17:56:10 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:18.855 17:56:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:18.855 17:56:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:18.855 INFO: Log level set to 20 00:40:18.855 INFO: Requests: 00:40:18.855 { 00:40:18.855 "jsonrpc": "2.0", 00:40:18.855 "method": "nvmf_set_config", 00:40:18.855 "id": 1, 00:40:18.855 "params": { 00:40:18.855 "admin_cmd_passthru": { 00:40:18.855 "identify_ctrlr": true 00:40:18.855 } 00:40:18.855 } 00:40:18.855 } 00:40:18.855 00:40:18.855 INFO: response: 00:40:18.855 { 00:40:18.855 "jsonrpc": "2.0", 00:40:18.855 "id": 1, 00:40:18.855 "result": true 00:40:18.855 } 00:40:18.855 00:40:18.855 17:56:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:18.855 17:56:10 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:18.855 17:56:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:18.855 17:56:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:18.855 INFO: Setting log level to 20 00:40:18.855 INFO: Setting log level to 20 00:40:18.855 INFO: Log level set to 20 00:40:18.855 INFO: Log level set to 20 00:40:18.855 INFO: Requests: 00:40:18.855 { 00:40:18.855 "jsonrpc": "2.0", 00:40:18.855 "method": "framework_start_init", 00:40:18.855 "id": 1 00:40:18.855 } 00:40:18.855 00:40:18.855 INFO: Requests: 00:40:18.855 { 00:40:18.855 "jsonrpc": "2.0", 00:40:18.855 "method": "framework_start_init", 00:40:18.855 "id": 1 00:40:18.855 } 00:40:18.855 00:40:19.116 [2024-10-08 17:56:10.879374] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:19.116 INFO: response: 00:40:19.116 { 00:40:19.116 "jsonrpc": "2.0", 00:40:19.116 "id": 1, 00:40:19.116 "result": true 00:40:19.116 } 00:40:19.116 00:40:19.116 INFO: response: 00:40:19.116 { 00:40:19.116 "jsonrpc": "2.0", 00:40:19.116 "id": 1, 00:40:19.116 "result": true 00:40:19.116 } 00:40:19.116 00:40:19.116 17:56:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:19.116 17:56:10 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:19.116 17:56:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:19.116 17:56:10 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:40:19.116 INFO: Setting log level to 40 00:40:19.116 INFO: Setting log level to 40 00:40:19.116 INFO: Setting log level to 40 00:40:19.116 [2024-10-08 17:56:10.893022] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:19.116 17:56:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:19.116 17:56:10 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:19.116 17:56:10 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:19.116 17:56:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:19.116 17:56:10 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:40:19.116 17:56:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:19.116 17:56:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:19.379 Nvme0n1 00:40:19.379 17:56:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:19.379 17:56:11 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:40:19.379 17:56:11 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:19.379 17:56:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:19.379 17:56:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:19.379 17:56:11 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:19.379 17:56:11 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:19.379 17:56:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:19.379 17:56:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:19.379 17:56:11 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:19.379 17:56:11 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:19.379 17:56:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:19.379 [2024-10-08 17:56:11.284844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:19.379 17:56:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:19.379 17:56:11 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:40:19.379 17:56:11 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:19.379 17:56:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:19.379 [ 00:40:19.379 { 00:40:19.379 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:19.379 "subtype": "Discovery", 00:40:19.379 "listen_addresses": [], 00:40:19.379 "allow_any_host": true, 00:40:19.379 "hosts": [] 00:40:19.379 }, 00:40:19.379 { 00:40:19.379 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:19.379 "subtype": "NVMe", 00:40:19.379 "listen_addresses": [ 00:40:19.379 { 00:40:19.379 "trtype": "TCP", 00:40:19.379 "adrfam": "IPv4", 00:40:19.379 "traddr": "10.0.0.2", 00:40:19.379 "trsvcid": "4420" 00:40:19.379 } 00:40:19.379 ], 00:40:19.379 "allow_any_host": true, 00:40:19.379 "hosts": [], 00:40:19.379 "serial_number": 
"SPDK00000000000001", 00:40:19.379 "model_number": "SPDK bdev Controller", 00:40:19.379 "max_namespaces": 1, 00:40:19.379 "min_cntlid": 1, 00:40:19.379 "max_cntlid": 65519, 00:40:19.379 "namespaces": [ 00:40:19.379 { 00:40:19.379 "nsid": 1, 00:40:19.379 "bdev_name": "Nvme0n1", 00:40:19.379 "name": "Nvme0n1", 00:40:19.379 "nguid": "3634473052605494002538450000002B", 00:40:19.379 "uuid": "36344730-5260-5494-0025-38450000002b" 00:40:19.379 } 00:40:19.379 ] 00:40:19.379 } 00:40:19.379 ] 00:40:19.379 17:56:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:19.379 17:56:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:19.379 17:56:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:40:19.379 17:56:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:40:19.640 17:56:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:40:19.640 17:56:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:19.640 17:56:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:40:19.640 17:56:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:40:19.900 17:56:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:40:19.900 17:56:11 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:40:19.900 17:56:11 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:40:19.900 17:56:11 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:19.900 17:56:11 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:19.900 17:56:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:19.900 17:56:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:19.900 17:56:11 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:40:19.900 17:56:11 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:40:19.900 17:56:11 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:19.900 17:56:11 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:40:19.900 17:56:11 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:19.900 17:56:11 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:40:19.900 17:56:11 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:19.900 17:56:11 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:19.900 rmmod nvme_tcp 00:40:20.161 rmmod nvme_fabrics 00:40:20.161 rmmod nvme_keyring 00:40:20.161 17:56:11 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:20.161 17:56:11 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:40:20.161 17:56:11 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:40:20.161 17:56:11 nvmf_identify_passthru -- nvmf/common.sh@515 -- # '[' -n 
670949 ']' 00:40:20.161 17:56:11 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 670949 00:40:20.161 17:56:11 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 670949 ']' 00:40:20.161 17:56:11 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 670949 00:40:20.161 17:56:11 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:40:20.161 17:56:11 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:20.161 17:56:11 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 670949 00:40:20.161 17:56:12 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:20.161 17:56:12 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:20.161 17:56:12 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 670949' 00:40:20.162 killing process with pid 670949 00:40:20.162 17:56:12 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 670949 00:40:20.162 17:56:12 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 670949 00:40:20.423 17:56:12 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:20.423 17:56:12 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:20.423 17:56:12 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:20.423 17:56:12 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:40:20.423 17:56:12 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:40:20.423 17:56:12 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:20.423 17:56:12 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:40:20.423 17:56:12 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:20.423 17:56:12 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:20.423 17:56:12 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:20.423 17:56:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:20.423 17:56:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:22.969 17:56:14 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:22.969 00:40:22.969 real 0m13.546s 00:40:22.969 user 0m11.019s 00:40:22.969 sys 0m6.813s 00:40:22.969 17:56:14 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:22.969 17:56:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:22.969 ************************************ 00:40:22.969 END TEST nvmf_identify_passthru 00:40:22.969 ************************************ 00:40:22.969 17:56:14 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:22.969 17:56:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:22.969 17:56:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:22.969 17:56:14 -- common/autotest_common.sh@10 -- # set +x 00:40:22.969 ************************************ 00:40:22.969 START TEST nvmf_dif 00:40:22.969 ************************************ 00:40:22.969 17:56:14 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:22.969 * Looking for test storage... 
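nvmftestfini's teardown above unloads the kernel initiator stack and strips only the firewall rules the test added; condensed below, with the retry cadence assumed (the trace shows the loop bound and module order, not the delay between attempts):

set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1   # assumed back-off; on this run the first attempt succeeded
done
set -e
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK_NVMF-tagged rules
ip -4 addr flush cvl_0_1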
00:40:22.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:22.969 17:56:14 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:22.969 17:56:14 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:40:22.969 17:56:14 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:22.969 17:56:14 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:22.969 17:56:14 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:40:22.969 17:56:14 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:22.969 17:56:14 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:22.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:22.969 --rc genhtml_branch_coverage=1 00:40:22.969 --rc genhtml_function_coverage=1 00:40:22.969 --rc genhtml_legend=1 00:40:22.969 --rc geninfo_all_blocks=1 00:40:22.969 --rc geninfo_unexecuted_blocks=1 00:40:22.969 00:40:22.969 ' 00:40:22.969 17:56:14 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:22.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:22.969 --rc genhtml_branch_coverage=1 00:40:22.969 --rc genhtml_function_coverage=1 00:40:22.969 --rc genhtml_legend=1 00:40:22.969 --rc geninfo_all_blocks=1 00:40:22.969 --rc geninfo_unexecuted_blocks=1 00:40:22.969 00:40:22.969 ' 00:40:22.969 17:56:14 nvmf_dif -- common/autotest_common.sh@1695 -- # 
export 'LCOV=lcov 00:40:22.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:22.969 --rc genhtml_branch_coverage=1 00:40:22.969 --rc genhtml_function_coverage=1 00:40:22.969 --rc genhtml_legend=1 00:40:22.969 --rc geninfo_all_blocks=1 00:40:22.969 --rc geninfo_unexecuted_blocks=1 00:40:22.969 00:40:22.969 ' 00:40:22.969 17:56:14 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:22.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:22.969 --rc genhtml_branch_coverage=1 00:40:22.969 --rc genhtml_function_coverage=1 00:40:22.969 --rc genhtml_legend=1 00:40:22.969 --rc geninfo_all_blocks=1 00:40:22.969 --rc geninfo_unexecuted_blocks=1 00:40:22.969 00:40:22.969 ' 00:40:22.969 17:56:14 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:22.969 17:56:14 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:22.970 17:56:14 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:40:22.970 17:56:14 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:22.970 17:56:14 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:22.970 17:56:14 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:22.970 17:56:14 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.970 17:56:14 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.970 17:56:14 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.970 17:56:14 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:40:22.970 17:56:14 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:22.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:22.970 17:56:14 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:40:22.970 17:56:14 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:40:22.970 17:56:14 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:40:22.970 17:56:14 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:40:22.970 17:56:14 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:22.970 17:56:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:22.970 17:56:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:22.970 17:56:14 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:40:22.970 17:56:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:31.115 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:31.115 
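The same vendor/device table drives NIC discovery for every transport test: per-family arrays of PCI addresses are built from a vendor:device cache, and for NET_TYPE=phy over TCP the e810 set wins. A minimal sketch of that classification; pci_bus_cache is shown pre-seeded with this host's two ports, whereas the real helper populates it from a bus scan:

intel=0x8086 mellanox=0x15b3
declare -A pci_bus_cache=( ["$intel:0x159b"]="0000:31:00.0 0000:31:00.1" )
e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
mlx=(${pci_bus_cache["$mellanox:0x1017"]} ${pci_bus_cache["$mellanox:0x1019"]})
pci_devs=("${e810[@]}")               # e810 takes precedence for tcp on phy
for pci in "${pci_devs[@]}"; do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        echo "Found net devices under $pci: ${dev##*/}"
    done
done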
17:56:22 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:31.115 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:31.115 17:56:22 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:31.116 Found net devices under 0000:31:00.0: cvl_0_0 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:31.116 Found net devices under 0000:31:00.1: cvl_0_1 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:31.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:31.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:40:31.116 00:40:31.116 --- 10.0.0.2 ping statistics --- 00:40:31.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:31.116 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:31.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:31.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:40:31.116 00:40:31.116 --- 10.0.0.1 ping statistics --- 00:40:31.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:31.116 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:40:31.116 17:56:22 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:34.425 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:40:34.425 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:40:34.425 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:40:34.425 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:40:34.425 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:40:34.425 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:40:34.425 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:40:34.425 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:40:34.425 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:40:34.425 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:40:34.425 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:40:34.425 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:40:34.425 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:40:34.425 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:40:34.425 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:40:34.425 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:40:34.425 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:40:34.425 17:56:26 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:34.425 17:56:26 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:34.425 17:56:26 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:34.425 17:56:26 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:34.425 17:56:26 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:34.425 17:56:26 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:34.425 17:56:26 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:40:34.425 17:56:26 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:40:34.425 17:56:26 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:34.425 17:56:26 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:34.425 17:56:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:34.425 17:56:26 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=677120 00:40:34.425 17:56:26 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 677120 00:40:34.425 17:56:26 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:40:34.425 17:56:26 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 677120 ']' 00:40:34.425 17:56:26 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:34.425 17:56:26 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:34.425 17:56:26 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:40:34.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:34.425 17:56:26 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:34.425 17:56:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:34.425 [2024-10-08 17:56:26.321925] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:40:34.425 [2024-10-08 17:56:26.321995] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:34.425 [2024-10-08 17:56:26.411470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:34.687 [2024-10-08 17:56:26.506557] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:34.687 [2024-10-08 17:56:26.506613] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:34.687 [2024-10-08 17:56:26.506624] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:34.687 [2024-10-08 17:56:26.506635] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:34.687 [2024-10-08 17:56:26.506644] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:34.687 [2024-10-08 17:56:26.507400] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:35.259 17:56:27 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:35.260 17:56:27 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:40:35.260 17:56:27 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:35.260 17:56:27 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:35.260 17:56:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:35.260 17:56:27 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:35.260 17:56:27 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:40:35.260 17:56:27 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:40:35.260 17:56:27 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.260 17:56:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:35.260 [2024-10-08 17:56:27.185119] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:35.260 17:56:27 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.260 17:56:27 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:40:35.260 17:56:27 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:35.260 17:56:27 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:35.260 17:56:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:35.260 ************************************ 00:40:35.260 START TEST fio_dif_1_default 00:40:35.260 ************************************ 00:40:35.260 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:40:35.260 17:56:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:40:35.260 17:56:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:40:35.260 17:56:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:40:35.260 17:56:27 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:40:35.260 17:56:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:40:35.260 17:56:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:35.260 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.260 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:35.260 bdev_null0 00:40:35.260 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.260 17:56:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:35.260 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.260 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:35.522 [2024-10-08 17:56:27.273551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:35.522 { 00:40:35.522 "params": { 00:40:35.522 "name": "Nvme$subsystem", 00:40:35.522 "trtype": "$TEST_TRANSPORT", 00:40:35.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:35.522 "adrfam": "ipv4", 00:40:35.522 "trsvcid": "$NVMF_PORT", 00:40:35.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:35.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:35.522 "hdgst": ${hdgst:-false}, 00:40:35.522 
"ddgst": ${ddgst:-false} 00:40:35.522 }, 00:40:35.522 "method": "bdev_nvme_attach_controller" 00:40:35.522 } 00:40:35.522 EOF 00:40:35.522 )") 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:35.522 "params": { 00:40:35.522 "name": "Nvme0", 00:40:35.522 "trtype": "tcp", 00:40:35.522 "traddr": "10.0.0.2", 00:40:35.522 "adrfam": "ipv4", 00:40:35.522 "trsvcid": "4420", 00:40:35.522 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:35.522 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:35.522 "hdgst": false, 00:40:35.522 "ddgst": false 00:40:35.522 }, 00:40:35.522 "method": "bdev_nvme_attach_controller" 00:40:35.522 }' 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:35.522 17:56:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:35.784 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:35.784 fio-3.35 00:40:35.784 Starting 1 thread 00:40:48.087 00:40:48.087 filename0: (groupid=0, jobs=1): err= 0: pid=677637: Tue Oct 8 17:56:38 2024 00:40:48.087 read: IOPS=332, BW=1331KiB/s (1363kB/s)(13.0MiB/10026msec) 00:40:48.087 slat (nsec): min=5383, max=71480, avg=6956.22, stdev=1965.72 00:40:48.087 clat (usec): min=477, max=42997, avg=12001.65, stdev=18004.01 00:40:48.087 lat (usec): min=482, max=43021, avg=12008.61, stdev=18003.46 00:40:48.087 clat percentiles (usec): 00:40:48.087 | 1.00th=[ 619], 5.00th=[ 799], 10.00th=[ 824], 20.00th=[ 848], 00:40:48.087 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1004], 00:40:48.087 | 70.00th=[ 1037], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:48.087 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:40:48.087 | 99.99th=[43254] 00:40:48.087 bw ( KiB/s): min= 768, max= 4608, per=100.00%, avg=1332.80, stdev=1098.32, samples=20 00:40:48.087 iops : min= 192, max= 1152, avg=333.20, stdev=274.58, samples=20 00:40:48.087 lat (usec) : 500=0.12%, 750=2.55%, 1000=54.08% 00:40:48.087 lat (msec) : 2=15.80%, 50=27.46% 00:40:48.087 cpu : usr=92.99%, sys=6.74%, ctx=16, majf=0, minf=281 00:40:48.087 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:48.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:48.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:48.087 issued rwts: total=3336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:48.087 latency : target=0, window=0, 
percentile=100.00%, depth=4 00:40:48.087 00:40:48.087 Run status group 0 (all jobs): 00:40:48.087 READ: bw=1331KiB/s (1363kB/s), 1331KiB/s-1331KiB/s (1363kB/s-1363kB/s), io=13.0MiB (13.7MB), run=10026-10026msec 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.087 00:40:48.087 real 0m11.174s 00:40:48.087 user 0m17.537s 00:40:48.087 sys 0m1.109s 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:48.087 ************************************ 00:40:48.087 END TEST fio_dif_1_default 00:40:48.087 ************************************ 00:40:48.087 17:56:38 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:40:48.087 17:56:38 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:48.087 17:56:38 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:48.087 17:56:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:48.087 ************************************ 00:40:48.087 START TEST fio_dif_1_multi_subsystems 00:40:48.087 ************************************ 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:48.087 bdev_null0 
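Note: the multi-subsystem variant starting here repeats the same four-RPC recipe once per subsystem, exactly as the surrounding rpc_cmd traces show: a 64 MB null bdev with 512-byte blocks plus 16 bytes of metadata and DIF type 1, an NVMe-oF subsystem, a namespace, and a TCP listener on port 4420. Pulled out of the harness, one iteration is the following sketch (rpc.py targets the default /var/tmp/spdk.sock; in this run it would have to execute inside the cvl_0_0_ns_spdk namespace like the target itself):

  sub=0   # the test performs this for sub=0 and sub=1
  ./scripts/rpc.py bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
      --serial-number "53313233-$sub" --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
  ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
      -t tcp -a 10.0.0.2 -s 4420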
00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:48.087 [2024-10-08 17:56:38.525336] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:48.087 bdev_null1 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:48.087 { 00:40:48.087 "params": { 00:40:48.087 "name": "Nvme$subsystem", 00:40:48.087 "trtype": "$TEST_TRANSPORT", 00:40:48.087 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:48.087 "adrfam": "ipv4", 00:40:48.087 "trsvcid": "$NVMF_PORT", 00:40:48.087 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:48.087 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:48.087 "hdgst": ${hdgst:-false}, 00:40:48.087 "ddgst": ${ddgst:-false} 00:40:48.087 }, 00:40:48.087 "method": "bdev_nvme_attach_controller" 00:40:48.087 } 00:40:48.087 EOF 00:40:48.087 )") 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:48.087 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:48.088 { 00:40:48.088 "params": { 00:40:48.088 "name": "Nvme$subsystem", 00:40:48.088 "trtype": "$TEST_TRANSPORT", 00:40:48.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:48.088 "adrfam": "ipv4", 00:40:48.088 "trsvcid": "$NVMF_PORT", 00:40:48.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:48.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:48.088 "hdgst": ${hdgst:-false}, 00:40:48.088 "ddgst": ${ddgst:-false} 00:40:48.088 }, 00:40:48.088 "method": "bdev_nvme_attach_controller" 00:40:48.088 } 00:40:48.088 EOF 00:40:48.088 )") 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:48.088 "params": { 00:40:48.088 "name": "Nvme0", 00:40:48.088 "trtype": "tcp", 00:40:48.088 "traddr": "10.0.0.2", 00:40:48.088 "adrfam": "ipv4", 00:40:48.088 "trsvcid": "4420", 00:40:48.088 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:48.088 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:48.088 "hdgst": false, 00:40:48.088 "ddgst": false 00:40:48.088 }, 00:40:48.088 "method": "bdev_nvme_attach_controller" 00:40:48.088 },{ 00:40:48.088 "params": { 00:40:48.088 "name": "Nvme1", 00:40:48.088 "trtype": "tcp", 00:40:48.088 "traddr": "10.0.0.2", 00:40:48.088 "adrfam": "ipv4", 00:40:48.088 "trsvcid": "4420", 00:40:48.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:48.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:48.088 "hdgst": false, 00:40:48.088 "ddgst": false 00:40:48.088 }, 00:40:48.088 "method": "bdev_nvme_attach_controller" 00:40:48.088 }' 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 
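Note: the config printed just above carries two bdev_nvme_attach_controller entries, one per subsystem, so the fio plugin sees two independent bdevs; gen_fio_conf pairs them with the filename0/filename1 jobs visible in the fio banner below. The repeated ldd plus grep libasan / grep libclang_rt.asan probes only decide whether a sanitizer runtime must be preloaded ahead of the plugin; in this non-sanitizer build they return nothing, and LD_PRELOAD ends up holding just the spdk_bdev plugin. Flattened onto one command line, the launch is roughly the following sketch (the real job file arrives on /dev/fd/61, and the Nvme0n1/Nvme1n1 names follow SPDK's controller-plus-namespace naming convention rather than anything printed in this log):

  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --thread \
    --rw=randread --bs=4k --iodepth=4 \
    --name=filename0 --filename=Nvme0n1 \
    --name=filename1 --filename=Nvme1n1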
00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:48.088 17:56:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:48.088 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:48.088 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:48.088 fio-3.35 00:40:48.088 Starting 2 threads 00:40:58.327 00:40:58.327 filename0: (groupid=0, jobs=1): err= 0: pid=680054: Tue Oct 8 17:56:49 2024 00:40:58.327 read: IOPS=98, BW=393KiB/s (403kB/s)(3936KiB/10009msec) 00:40:58.327 slat (nsec): min=5382, max=35137, avg=6298.44, stdev=2045.25 00:40:58.327 clat (usec): min=828, max=42156, avg=40666.49, stdev=3607.63 00:40:58.327 lat (usec): min=834, max=42192, avg=40672.79, stdev=3607.71 00:40:58.327 clat percentiles (usec): 00:40:58.327 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:40:58.327 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:58.327 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:58.327 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:58.327 | 99.99th=[42206] 00:40:58.327 bw ( KiB/s): min= 384, max= 416, per=34.07%, avg=392.00, stdev=14.22, samples=20 00:40:58.327 iops : min= 96, max= 104, avg=98.00, stdev= 3.55, samples=20 00:40:58.327 lat (usec) : 1000=0.81% 00:40:58.327 lat (msec) : 50=99.19% 00:40:58.327 cpu : usr=95.34%, sys=4.44%, ctx=8, majf=0, minf=119 00:40:58.327 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:58.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:58.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:58.327 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:58.327 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:58.327 filename1: (groupid=0, jobs=1): err= 0: pid=680055: Tue Oct 8 17:56:49 2024 00:40:58.327 read: IOPS=189, BW=758KiB/s (777kB/s)(7616KiB/10041msec) 00:40:58.328 slat (nsec): min=5376, max=54272, avg=6195.65, stdev=1966.85 00:40:58.328 clat (usec): min=497, max=42365, avg=21076.25, stdev=20161.59 00:40:58.328 lat (usec): min=503, max=42371, avg=21082.45, stdev=20161.54 00:40:58.328 clat percentiles (usec): 00:40:58.328 | 1.00th=[ 603], 5.00th=[ 799], 10.00th=[ 816], 20.00th=[ 832], 00:40:58.328 | 30.00th=[ 848], 40.00th=[ 865], 50.00th=[40633], 60.00th=[41157], 00:40:58.328 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:58.328 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:40:58.328 | 99.99th=[42206] 00:40:58.328 bw ( KiB/s): min= 672, max= 768, per=66.06%, avg=760.00, stdev=25.16, samples=20 00:40:58.328 iops : min= 168, max= 192, avg=190.00, stdev= 6.29, samples=20 00:40:58.328 lat (usec) : 500=0.05%, 750=2.10%, 1000=47.22% 00:40:58.328 lat (msec) : 2=0.42%, 50=50.21% 00:40:58.328 cpu : usr=95.56%, sys=4.22%, ctx=11, majf=0, minf=212 00:40:58.328 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:58.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:58.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:58.328 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:58.328 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:58.328 00:40:58.328 Run status group 0 (all jobs): 00:40:58.328 READ: bw=1150KiB/s (1178kB/s), 393KiB/s-758KiB/s (403kB/s-777kB/s), io=11.3MiB (11.8MB), run=10009-10041msec 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.328 00:40:58.328 real 0m11.517s 00:40:58.328 user 0m31.463s 00:40:58.328 sys 0m1.258s 00:40:58.328 17:56:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:58.328 17:56:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:58.328 ************************************ 00:40:58.328 END TEST fio_dif_1_multi_subsystems 
00:40:58.328 ************************************ 00:40:58.328 17:56:50 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:40:58.328 17:56:50 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:58.328 17:56:50 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:58.328 17:56:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:58.328 ************************************ 00:40:58.328 START TEST fio_dif_rand_params 00:40:58.328 ************************************ 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:58.328 bdev_null0 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:58.328 [2024-10-08 
17:56:50.128006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:58.328 { 00:40:58.328 "params": { 00:40:58.328 "name": "Nvme$subsystem", 00:40:58.328 "trtype": "$TEST_TRANSPORT", 00:40:58.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:58.328 "adrfam": "ipv4", 00:40:58.328 "trsvcid": "$NVMF_PORT", 00:40:58.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:58.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:58.328 "hdgst": ${hdgst:-false}, 00:40:58.328 "ddgst": ${ddgst:-false} 00:40:58.328 }, 00:40:58.328 "method": "bdev_nvme_attach_controller" 00:40:58.328 } 00:40:58.328 EOF 00:40:58.328 )") 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:58.328 "params": { 00:40:58.328 "name": "Nvme0", 00:40:58.328 "trtype": "tcp", 00:40:58.328 "traddr": "10.0.0.2", 00:40:58.328 "adrfam": "ipv4", 00:40:58.328 "trsvcid": "4420", 00:40:58.328 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:58.328 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:58.328 "hdgst": false, 00:40:58.328 "ddgst": false 00:40:58.328 }, 00:40:58.328 "method": "bdev_nvme_attach_controller" 00:40:58.328 }' 00:40:58.328 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:58.329 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:58.329 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:58.329 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:58.329 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:40:58.329 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:58.329 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:58.329 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:58.329 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:58.329 17:56:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:58.589 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:58.589 ... 
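Note: the three jobs started below come from the first rand_params pass, configured earlier in this test as NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5. The summary lines that follow are self-consistent: for the first job, 348 IOPS x 128 KiB = 44544 KiB/s, about 43.5 MiB/s, matching the reported BW=43.6MiB/s (45.7MB/s). As T10-PI background rather than anything this log states, DIF type 3 keeps the guard CRC and application tag in the 16-byte metadata but defines no checked reference tag. One pass of the sweep reduces to a standalone sketch (null bdev re-created per pass with that pass's DIF type, JSON config regenerated on /dev/fd/62):

  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --thread \
    --name=filename0 --filename=Nvme0n1 \
    --rw=randread --bs=128k --numjobs=3 --iodepth=3 --runtime=5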
00:40:58.589 fio-3.35 00:40:58.589 Starting 3 threads 00:41:05.174 00:41:05.174 filename0: (groupid=0, jobs=1): err= 0: pid=682265: Tue Oct 8 17:56:56 2024 00:41:05.175 read: IOPS=348, BW=43.6MiB/s (45.7MB/s)(218MiB/5007msec) 00:41:05.175 slat (nsec): min=5565, max=37052, avg=8848.47, stdev=1478.80 00:41:05.175 clat (usec): min=4689, max=87241, avg=8589.72, stdev=4917.09 00:41:05.175 lat (usec): min=4695, max=87251, avg=8598.57, stdev=4917.23 00:41:05.175 clat percentiles (usec): 00:41:05.175 | 1.00th=[ 5342], 5.00th=[ 6390], 10.00th=[ 6783], 20.00th=[ 7177], 00:41:05.175 | 30.00th=[ 7504], 40.00th=[ 7767], 50.00th=[ 8094], 60.00th=[ 8356], 00:41:05.175 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9372], 95.00th=[ 9896], 00:41:05.175 | 99.00th=[45876], 99.50th=[49021], 99.90th=[50594], 99.95th=[87557], 00:41:05.175 | 99.99th=[87557] 00:41:05.175 bw ( KiB/s): min=34048, max=47872, per=35.35%, avg=44620.80, stdev=4189.25, samples=10 00:41:05.175 iops : min= 266, max= 374, avg=348.60, stdev=32.73, samples=10 00:41:05.175 lat (msec) : 10=95.65%, 20=3.04%, 50=1.20%, 100=0.11% 00:41:05.175 cpu : usr=93.35%, sys=6.37%, ctx=7, majf=0, minf=67 00:41:05.175 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:05.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:05.175 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:05.175 issued rwts: total=1746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:05.175 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:05.175 filename0: (groupid=0, jobs=1): err= 0: pid=682266: Tue Oct 8 17:56:56 2024 00:41:05.175 read: IOPS=281, BW=35.2MiB/s (36.9MB/s)(177MiB/5045msec) 00:41:05.175 slat (nsec): min=5675, max=32814, avg=9195.02, stdev=1988.89 00:41:05.175 clat (usec): min=5182, max=90063, avg=10622.88, stdev=5794.60 00:41:05.175 lat (usec): min=5207, max=90073, avg=10632.07, stdev=5794.51 00:41:05.175 clat percentiles (usec): 00:41:05.175 | 1.00th=[ 6325], 5.00th=[ 7701], 10.00th=[ 8094], 20.00th=[ 8717], 00:41:05.175 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10421], 00:41:05.175 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11469], 95.00th=[11994], 00:41:05.175 | 99.00th=[48497], 99.50th=[49546], 99.90th=[89654], 99.95th=[89654], 00:41:05.175 | 99.99th=[89654] 00:41:05.175 bw ( KiB/s): min=25344, max=39936, per=28.74%, avg=36275.20, stdev=4178.81, samples=10 00:41:05.175 iops : min= 198, max= 312, avg=283.40, stdev=32.65, samples=10 00:41:05.175 lat (msec) : 10=45.17%, 20=53.14%, 50=1.41%, 100=0.28% 00:41:05.175 cpu : usr=95.38%, sys=4.34%, ctx=11, majf=0, minf=78 00:41:05.175 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:05.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:05.175 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:05.175 issued rwts: total=1419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:05.175 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:05.175 filename0: (groupid=0, jobs=1): err= 0: pid=682267: Tue Oct 8 17:56:56 2024 00:41:05.175 read: IOPS=358, BW=44.9MiB/s (47.0MB/s)(226MiB/5044msec) 00:41:05.175 slat (nsec): min=7879, max=48268, avg=9289.95, stdev=1841.22 00:41:05.175 clat (usec): min=3904, max=48611, avg=8345.66, stdev=3683.12 00:41:05.175 lat (usec): min=3912, max=48620, avg=8354.95, stdev=3683.09 00:41:05.175 clat percentiles (usec): 00:41:05.175 | 1.00th=[ 4752], 5.00th=[ 5735], 10.00th=[ 6587], 20.00th=[ 7177], 
00:41:05.175 | 30.00th=[ 7504], 40.00th=[ 7832], 50.00th=[ 8094], 60.00th=[ 8455], 00:41:05.175 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[ 9896], 00:41:05.175 | 99.00th=[11076], 99.50th=[44827], 99.90th=[48497], 99.95th=[48497], 00:41:05.175 | 99.99th=[48497] 00:41:05.175 bw ( KiB/s): min=40704, max=48993, per=36.66%, avg=46268.90, stdev=2280.23, samples=10 00:41:05.175 iops : min= 318, max= 382, avg=361.40, stdev=17.72, samples=10 00:41:05.175 lat (msec) : 4=0.06%, 10=95.25%, 20=3.87%, 50=0.83% 00:41:05.175 cpu : usr=95.06%, sys=4.64%, ctx=31, majf=0, minf=148 00:41:05.175 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:05.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:05.175 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:05.175 issued rwts: total=1810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:05.175 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:05.175 00:41:05.175 Run status group 0 (all jobs): 00:41:05.175 READ: bw=123MiB/s (129MB/s), 35.2MiB/s-44.9MiB/s (36.9MB/s-47.0MB/s), io=622MiB (652MB), run=5007-5045msec 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:05.175 bdev_null0 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:05.175 [2024-10-08 17:56:56.395613] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:05.175 bdev_null1 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:05.175 bdev_null2 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:05.175 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:05.176 17:56:56 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:05.176 { 00:41:05.176 "params": { 00:41:05.176 "name": "Nvme$subsystem", 00:41:05.176 "trtype": "$TEST_TRANSPORT", 00:41:05.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:05.176 "adrfam": "ipv4", 00:41:05.176 "trsvcid": "$NVMF_PORT", 00:41:05.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:05.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:05.176 "hdgst": ${hdgst:-false}, 00:41:05.176 "ddgst": ${ddgst:-false} 00:41:05.176 }, 00:41:05.176 "method": "bdev_nvme_attach_controller" 00:41:05.176 } 00:41:05.176 EOF 00:41:05.176 )") 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:05.176 { 00:41:05.176 "params": { 00:41:05.176 "name": "Nvme$subsystem", 00:41:05.176 "trtype": "$TEST_TRANSPORT", 00:41:05.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:05.176 "adrfam": "ipv4", 00:41:05.176 "trsvcid": "$NVMF_PORT", 00:41:05.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:05.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:05.176 "hdgst": ${hdgst:-false}, 00:41:05.176 "ddgst": ${ddgst:-false} 00:41:05.176 }, 00:41:05.176 "method": "bdev_nvme_attach_controller" 00:41:05.176 } 00:41:05.176 EOF 00:41:05.176 )") 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:05.176 17:56:56 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:05.176 { 00:41:05.176 "params": { 00:41:05.176 "name": "Nvme$subsystem", 00:41:05.176 "trtype": "$TEST_TRANSPORT", 00:41:05.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:05.176 "adrfam": "ipv4", 00:41:05.176 "trsvcid": "$NVMF_PORT", 00:41:05.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:05.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:05.176 "hdgst": ${hdgst:-false}, 00:41:05.176 "ddgst": ${ddgst:-false} 00:41:05.176 }, 00:41:05.176 "method": "bdev_nvme_attach_controller" 00:41:05.176 } 00:41:05.176 EOF 00:41:05.176 )") 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:05.176 "params": { 00:41:05.176 "name": "Nvme0", 00:41:05.176 "trtype": "tcp", 00:41:05.176 "traddr": "10.0.0.2", 00:41:05.176 "adrfam": "ipv4", 00:41:05.176 "trsvcid": "4420", 00:41:05.176 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:05.176 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:05.176 "hdgst": false, 00:41:05.176 "ddgst": false 00:41:05.176 }, 00:41:05.176 "method": "bdev_nvme_attach_controller" 00:41:05.176 },{ 00:41:05.176 "params": { 00:41:05.176 "name": "Nvme1", 00:41:05.176 "trtype": "tcp", 00:41:05.176 "traddr": "10.0.0.2", 00:41:05.176 "adrfam": "ipv4", 00:41:05.176 "trsvcid": "4420", 00:41:05.176 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:05.176 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:05.176 "hdgst": false, 00:41:05.176 "ddgst": false 00:41:05.176 }, 00:41:05.176 "method": "bdev_nvme_attach_controller" 00:41:05.176 },{ 00:41:05.176 "params": { 00:41:05.176 "name": "Nvme2", 00:41:05.176 "trtype": "tcp", 00:41:05.176 "traddr": "10.0.0.2", 00:41:05.176 "adrfam": "ipv4", 00:41:05.176 "trsvcid": "4420", 00:41:05.176 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:05.176 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:05.176 "hdgst": false, 00:41:05.176 "ddgst": false 00:41:05.176 }, 00:41:05.176 "method": "bdev_nvme_attach_controller" 00:41:05.176 }' 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:05.176 
17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:05.176 17:56:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:05.176 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:05.176 ... 00:41:05.176 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:05.176 ... 00:41:05.176 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:05.176 ... 00:41:05.176 fio-3.35 00:41:05.176 Starting 24 threads 00:41:17.411 00:41:17.411 filename0: (groupid=0, jobs=1): err= 0: pid=683777: Tue Oct 8 17:57:08 2024 00:41:17.411 read: IOPS=670, BW=2683KiB/s (2747kB/s)(26.2MiB/10020msec) 00:41:17.411 slat (nsec): min=5562, max=85649, avg=13306.98, stdev=11140.69 00:41:17.411 clat (usec): min=10143, max=33406, avg=23719.20, stdev=1153.27 00:41:17.411 lat (usec): min=10149, max=33413, avg=23732.51, stdev=1153.71 00:41:17.411 clat percentiles (usec): 00:41:17.411 | 1.00th=[17957], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:41:17.411 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:41:17.411 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:41:17.411 | 99.00th=[25035], 99.50th=[25297], 99.90th=[30016], 99.95th=[31065], 00:41:17.411 | 99.99th=[33424] 00:41:17.411 bw ( KiB/s): min= 2560, max= 2816, per=4.16%, avg=2681.60, stdev=63.87, samples=20 00:41:17.411 iops : min= 640, max= 704, avg=670.40, stdev=15.97, samples=20 00:41:17.411 lat (msec) : 20=1.25%, 50=98.75% 00:41:17.411 cpu : usr=98.33%, sys=1.11%, ctx=125, majf=0, minf=9 00:41:17.411 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:41:17.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.411 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.411 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.411 filename0: (groupid=0, jobs=1): err= 0: pid=683778: Tue Oct 8 17:57:08 2024 00:41:17.411 read: IOPS=676, BW=2706KiB/s (2771kB/s)(26.4MiB/10006msec) 00:41:17.411 slat (nsec): min=5560, max=74721, avg=7459.41, stdev=4312.82 00:41:17.411 clat (usec): min=10326, max=25396, avg=23590.58, stdev=1590.57 00:41:17.411 lat (usec): min=10332, max=25403, avg=23598.04, stdev=1589.70 00:41:17.411 clat percentiles (usec): 00:41:17.411 | 1.00th=[14484], 5.00th=[22938], 10.00th=[23462], 20.00th=[23462], 00:41:17.411 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:41:17.411 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24511], 00:41:17.411 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:41:17.411 | 99.99th=[25297] 00:41:17.411 bw ( KiB/s): min= 2560, max= 2816, per=4.19%, avg=2700.80, stdev=57.24, samples=20 00:41:17.411 iops : min= 640, max= 704, avg=675.20, stdev=14.31, samples=20 00:41:17.411 lat (msec) : 20=3.31%, 50=96.69% 00:41:17.411 cpu : usr=98.74%, sys=0.81%, ctx=116, majf=0, minf=9 00:41:17.411 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 
32=0.0%, >=64=0.0% 00:41:17.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.411 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.411 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.411 filename0: (groupid=0, jobs=1): err= 0: pid=683779: Tue Oct 8 17:57:08 2024 00:41:17.411 read: IOPS=669, BW=2678KiB/s (2742kB/s)(26.2MiB/10015msec) 00:41:17.411 slat (nsec): min=5269, max=94922, avg=25078.45, stdev=16258.27 00:41:17.411 clat (usec): min=12692, max=41294, avg=23666.05, stdev=1698.00 00:41:17.411 lat (usec): min=12704, max=41309, avg=23691.13, stdev=1698.19 00:41:17.411 clat percentiles (usec): 00:41:17.411 | 1.00th=[17171], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200], 00:41:17.411 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:41:17.411 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:41:17.411 | 99.00th=[30278], 99.50th=[33424], 99.90th=[39584], 99.95th=[41157], 00:41:17.411 | 99.99th=[41157] 00:41:17.411 bw ( KiB/s): min= 2560, max= 2816, per=4.15%, avg=2675.85, stdev=65.33, samples=20 00:41:17.411 iops : min= 640, max= 704, avg=668.95, stdev=16.36, samples=20 00:41:17.411 lat (msec) : 20=2.30%, 50=97.70% 00:41:17.411 cpu : usr=98.60%, sys=0.94%, ctx=119, majf=0, minf=9 00:41:17.411 IO depths : 1=5.6%, 2=11.2%, 4=23.3%, 8=52.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:41:17.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.411 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.411 issued rwts: total=6705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.411 filename0: (groupid=0, jobs=1): err= 0: pid=683780: Tue Oct 8 17:57:08 2024 00:41:17.411 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.1MiB/10029msec) 00:41:17.411 slat (nsec): min=5605, max=74728, avg=22662.84, stdev=12979.22 00:41:17.411 clat (usec): min=16399, max=55301, avg=23841.92, stdev=1682.61 00:41:17.411 lat (usec): min=16406, max=55308, avg=23864.58, stdev=1681.89 00:41:17.411 clat percentiles (usec): 00:41:17.412 | 1.00th=[22938], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:41:17.412 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:41:17.412 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:41:17.412 | 99.00th=[25297], 99.50th=[31065], 99.90th=[55313], 99.95th=[55313], 00:41:17.412 | 99.99th=[55313] 00:41:17.412 bw ( KiB/s): min= 2560, max= 2816, per=4.14%, avg=2667.79, stdev=64.19, samples=19 00:41:17.412 iops : min= 640, max= 704, avg=666.95, stdev=16.05, samples=19 00:41:17.412 lat (msec) : 20=0.06%, 50=99.70%, 100=0.24% 00:41:17.412 cpu : usr=98.91%, sys=0.81%, ctx=11, majf=0, minf=9 00:41:17.412 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:17.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.412 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.412 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.412 filename0: (groupid=0, jobs=1): err= 0: pid=683781: Tue Oct 8 17:57:08 2024 00:41:17.412 read: IOPS=701, BW=2807KiB/s (2874kB/s)(27.5MiB/10039msec) 00:41:17.412 slat (nsec): min=5349, max=95774, avg=16460.70, stdev=14267.24 00:41:17.412 clat 
(usec): min=9304, max=50411, avg=22664.69, stdev=4615.91 00:41:17.412 lat (usec): min=9312, max=50417, avg=22681.16, stdev=4618.74 00:41:17.412 clat percentiles (usec): 00:41:17.412 | 1.00th=[13173], 5.00th=[15533], 10.00th=[16319], 20.00th=[18744], 00:41:17.412 | 30.00th=[21365], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:41:17.412 | 70.00th=[23987], 80.00th=[24249], 90.00th=[26346], 95.00th=[30278], 00:41:17.412 | 99.00th=[39060], 99.50th=[40633], 99.90th=[43254], 99.95th=[50594], 00:41:17.412 | 99.99th=[50594] 00:41:17.412 bw ( KiB/s): min= 2560, max= 3136, per=4.36%, avg=2811.45, stdev=163.19, samples=20 00:41:17.412 iops : min= 640, max= 784, avg=702.85, stdev=40.82, samples=20 00:41:17.412 lat (msec) : 10=0.14%, 20=25.67%, 50=74.13%, 100=0.06% 00:41:17.412 cpu : usr=98.52%, sys=1.11%, ctx=79, majf=0, minf=9 00:41:17.412 IO depths : 1=1.8%, 2=3.8%, 4=11.2%, 8=71.4%, 16=11.7%, 32=0.0%, >=64=0.0% 00:41:17.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.412 complete : 0=0.0%, 4=90.5%, 8=4.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.412 issued rwts: total=7044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.412 filename0: (groupid=0, jobs=1): err= 0: pid=683782: Tue Oct 8 17:57:08 2024 00:41:17.412 read: IOPS=667, BW=2671KiB/s (2735kB/s)(26.2MiB/10040msec) 00:41:17.412 slat (nsec): min=5557, max=78373, avg=18782.42, stdev=13246.50 00:41:17.412 clat (usec): min=12935, max=46546, avg=23804.57, stdev=1385.41 00:41:17.412 lat (usec): min=12943, max=46555, avg=23823.35, stdev=1384.54 00:41:17.412 clat percentiles (usec): 00:41:17.412 | 1.00th=[22938], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:41:17.412 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:41:17.412 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:41:17.412 | 99.00th=[25035], 99.50th=[25560], 99.90th=[46400], 99.95th=[46400], 00:41:17.412 | 99.99th=[46400] 00:41:17.412 bw ( KiB/s): min= 2560, max= 2688, per=4.15%, avg=2675.20, stdev=39.40, samples=20 00:41:17.412 iops : min= 640, max= 672, avg=668.80, stdev= 9.85, samples=20 00:41:17.412 lat (msec) : 20=0.48%, 50=99.52% 00:41:17.412 cpu : usr=98.82%, sys=0.88%, ctx=17, majf=0, minf=9 00:41:17.412 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:17.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.412 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.412 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.412 filename0: (groupid=0, jobs=1): err= 0: pid=683783: Tue Oct 8 17:57:08 2024 00:41:17.412 read: IOPS=665, BW=2663KiB/s (2727kB/s)(26.1MiB/10021msec) 00:41:17.412 slat (nsec): min=5708, max=92018, avg=24405.69, stdev=15152.33 00:41:17.412 clat (usec): min=21889, max=44244, avg=23806.82, stdev=1433.47 00:41:17.412 lat (usec): min=21897, max=44260, avg=23831.23, stdev=1432.31 00:41:17.412 clat percentiles (usec): 00:41:17.412 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:41:17.412 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:41:17.412 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:41:17.412 | 99.00th=[25035], 99.50th=[33817], 99.90th=[42730], 99.95th=[42730], 00:41:17.412 | 99.99th=[44303] 00:41:17.412 bw ( KiB/s): min= 2560, max= 2688, 
per=4.14%, avg=2668.05, stdev=47.34, samples=19 00:41:17.412 iops : min= 640, max= 672, avg=667.00, stdev=11.86, samples=19 00:41:17.412 lat (msec) : 50=100.00% 00:41:17.412 cpu : usr=98.71%, sys=0.86%, ctx=41, majf=0, minf=9 00:41:17.412 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:17.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.412 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.412 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.412 filename0: (groupid=0, jobs=1): err= 0: pid=683784: Tue Oct 8 17:57:08 2024 00:41:17.412 read: IOPS=665, BW=2663KiB/s (2727kB/s)(26.1MiB/10021msec) 00:41:17.412 slat (nsec): min=5613, max=90315, avg=24693.06, stdev=14696.96 00:41:17.412 clat (usec): min=19226, max=42988, avg=23797.72, stdev=1441.01 00:41:17.412 lat (usec): min=19233, max=43005, avg=23822.42, stdev=1440.14 00:41:17.412 clat percentiles (usec): 00:41:17.412 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:41:17.412 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:41:17.412 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:41:17.412 | 99.00th=[25035], 99.50th=[33424], 99.90th=[42730], 99.95th=[42730], 00:41:17.412 | 99.99th=[42730] 00:41:17.412 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2668.05, stdev=47.34, samples=19 00:41:17.412 iops : min= 640, max= 672, avg=667.00, stdev=11.86, samples=19 00:41:17.412 lat (msec) : 20=0.03%, 50=99.97% 00:41:17.412 cpu : usr=98.92%, sys=0.79%, ctx=11, majf=0, minf=9 00:41:17.412 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:17.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.412 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.412 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.412 filename1: (groupid=0, jobs=1): err= 0: pid=683785: Tue Oct 8 17:57:08 2024 00:41:17.412 read: IOPS=670, BW=2681KiB/s (2745kB/s)(26.2MiB/10002msec) 00:41:17.412 slat (nsec): min=5208, max=75972, avg=13042.81, stdev=10227.92 00:41:17.412 clat (usec): min=9428, max=32854, avg=23765.47, stdev=1117.60 00:41:17.412 lat (usec): min=9436, max=32860, avg=23778.51, stdev=1118.05 00:41:17.412 clat percentiles (usec): 00:41:17.412 | 1.00th=[19268], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:41:17.412 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:41:17.412 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:41:17.412 | 99.00th=[25297], 99.50th=[29492], 99.90th=[31327], 99.95th=[32375], 00:41:17.412 | 99.99th=[32900] 00:41:17.412 bw ( KiB/s): min= 2560, max= 2816, per=4.16%, avg=2681.26, stdev=51.80, samples=19 00:41:17.412 iops : min= 640, max= 704, avg=670.32, stdev=12.95, samples=19 00:41:17.412 lat (msec) : 10=0.03%, 20=1.16%, 50=98.81% 00:41:17.412 cpu : usr=98.14%, sys=1.26%, ctx=148, majf=0, minf=9 00:41:17.412 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:41:17.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.412 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.412 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.412 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:41:17.412 filename1: (groupid=0, jobs=1): err= 0: pid=683786: Tue Oct 8 17:57:08 2024 00:41:17.412 read: IOPS=665, BW=2663KiB/s (2727kB/s)(26.1MiB/10022msec) 00:41:17.412 slat (nsec): min=5569, max=95088, avg=25035.82, stdev=16343.99 00:41:17.412 clat (usec): min=22175, max=43201, avg=23781.87, stdev=1453.63 00:41:17.412 lat (usec): min=22181, max=43211, avg=23806.91, stdev=1453.09 00:41:17.412 clat percentiles (usec): 00:41:17.412 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23200], 20.00th=[23200], 00:41:17.412 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:41:17.412 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:41:17.412 | 99.00th=[25035], 99.50th=[33424], 99.90th=[43254], 99.95th=[43254], 00:41:17.412 | 99.99th=[43254] 00:41:17.412 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2667.79, stdev=47.95, samples=19 00:41:17.412 iops : min= 640, max= 672, avg=666.95, stdev=11.99, samples=19 00:41:17.412 lat (msec) : 50=100.00% 00:41:17.412 cpu : usr=99.15%, sys=0.57%, ctx=12, majf=0, minf=9 00:41:17.412 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:17.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.412 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.412 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.412 filename1: (groupid=0, jobs=1): err= 0: pid=683787: Tue Oct 8 17:57:08 2024 00:41:17.412 read: IOPS=682, BW=2732KiB/s (2797kB/s)(26.8MiB/10045msec) 00:41:17.412 slat (nsec): min=4463, max=88420, avg=16890.43, stdev=13479.65 00:41:17.412 clat (usec): min=9050, max=46362, avg=23267.06, stdev=3742.17 00:41:17.412 lat (usec): min=9061, max=46371, avg=23283.96, stdev=3743.03 00:41:17.412 clat percentiles (usec): 00:41:17.412 | 1.00th=[13304], 5.00th=[16450], 10.00th=[18744], 20.00th=[21365], 00:41:17.412 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:41:17.412 | 70.00th=[23987], 80.00th=[24249], 90.00th=[26346], 95.00th=[29492], 00:41:17.412 | 99.00th=[35914], 99.50th=[37487], 99.90th=[42730], 99.95th=[46400], 00:41:17.412 | 99.99th=[46400] 00:41:17.412 bw ( KiB/s): min= 2560, max= 2896, per=4.25%, avg=2737.60, stdev=80.74, samples=20 00:41:17.412 iops : min= 640, max= 724, avg=684.40, stdev=20.18, samples=20 00:41:17.412 lat (msec) : 10=0.26%, 20=16.37%, 50=83.37% 00:41:17.412 cpu : usr=97.33%, sys=1.74%, ctx=820, majf=0, minf=9 00:41:17.412 IO depths : 1=2.0%, 2=4.0%, 4=10.5%, 8=71.0%, 16=12.6%, 32=0.0%, >=64=0.0% 00:41:17.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.412 complete : 0=0.0%, 4=90.6%, 8=5.8%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.412 issued rwts: total=6860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.412 filename1: (groupid=0, jobs=1): err= 0: pid=683788: Tue Oct 8 17:57:08 2024 00:41:17.412 read: IOPS=669, BW=2680KiB/s (2744kB/s)(26.2MiB/10007msec) 00:41:17.412 slat (nsec): min=5565, max=96312, avg=22601.14, stdev=14466.28 00:41:17.412 clat (usec): min=10612, max=34540, avg=23702.16, stdev=1177.39 00:41:17.412 lat (usec): min=10621, max=34547, avg=23724.76, stdev=1176.01 00:41:17.412 clat percentiles (usec): 00:41:17.412 | 1.00th=[19006], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:41:17.413 | 30.00th=[23462], 40.00th=[23725], 
50.00th=[23725], 60.00th=[23725], 00:41:17.413 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:41:17.413 | 99.00th=[25035], 99.50th=[26608], 99.90th=[32637], 99.95th=[34341], 00:41:17.413 | 99.99th=[34341] 00:41:17.413 bw ( KiB/s): min= 2560, max= 2816, per=4.15%, avg=2675.20, stdev=57.24, samples=20 00:41:17.413 iops : min= 640, max= 704, avg=668.80, stdev=14.31, samples=20 00:41:17.413 lat (msec) : 20=1.10%, 50=98.90% 00:41:17.413 cpu : usr=98.81%, sys=0.89%, ctx=15, majf=0, minf=9 00:41:17.413 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:17.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.413 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.413 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.413 filename1: (groupid=0, jobs=1): err= 0: pid=683789: Tue Oct 8 17:57:08 2024 00:41:17.413 read: IOPS=712, BW=2851KiB/s (2919kB/s)(27.9MiB/10022msec) 00:41:17.413 slat (nsec): min=5553, max=84223, avg=16110.27, stdev=11833.69 00:41:17.413 clat (usec): min=9241, max=58644, avg=22311.44, stdev=3860.71 00:41:17.413 lat (usec): min=9251, max=58650, avg=22327.55, stdev=3863.43 00:41:17.413 clat percentiles (usec): 00:41:17.413 | 1.00th=[12911], 5.00th=[13829], 10.00th=[16712], 20.00th=[19006], 00:41:17.413 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:41:17.413 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:41:17.413 | 99.00th=[31065], 99.50th=[35390], 99.90th=[55313], 99.95th=[58459], 00:41:17.413 | 99.99th=[58459] 00:41:17.413 bw ( KiB/s): min= 2549, max= 3728, per=4.45%, avg=2868.47, stdev=323.25, samples=19 00:41:17.413 iops : min= 637, max= 932, avg=717.11, stdev=80.83, samples=19 00:41:17.413 lat (msec) : 10=0.17%, 20=21.02%, 50=78.68%, 100=0.14% 00:41:17.413 cpu : usr=99.02%, sys=0.70%, ctx=13, majf=0, minf=9 00:41:17.413 IO depths : 1=2.7%, 2=6.2%, 4=15.5%, 8=64.5%, 16=11.1%, 32=0.0%, >=64=0.0% 00:41:17.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.413 complete : 0=0.0%, 4=91.9%, 8=3.7%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.413 issued rwts: total=7142,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.413 filename1: (groupid=0, jobs=1): err= 0: pid=683790: Tue Oct 8 17:57:08 2024 00:41:17.413 read: IOPS=666, BW=2666KiB/s (2730kB/s)(26.1MiB/10011msec) 00:41:17.413 slat (nsec): min=5552, max=70022, avg=16810.97, stdev=10815.96 00:41:17.413 clat (usec): min=11034, max=46475, avg=23861.87, stdev=1241.63 00:41:17.413 lat (usec): min=11043, max=46485, avg=23878.68, stdev=1240.93 00:41:17.413 clat percentiles (usec): 00:41:17.413 | 1.00th=[22938], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:41:17.413 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:41:17.413 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:41:17.413 | 99.00th=[25035], 99.50th=[28967], 99.90th=[46400], 99.95th=[46400], 00:41:17.413 | 99.99th=[46400] 00:41:17.413 bw ( KiB/s): min= 2560, max= 2816, per=4.14%, avg=2668.05, stdev=63.73, samples=19 00:41:17.413 iops : min= 640, max= 704, avg=667.00, stdev=15.95, samples=19 00:41:17.413 lat (msec) : 20=0.03%, 50=99.97% 00:41:17.413 cpu : usr=98.91%, sys=0.78%, ctx=13, majf=0, minf=9 00:41:17.413 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 
32=0.0%, >=64=0.0% 00:41:17.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.413 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.413 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.413 filename1: (groupid=0, jobs=1): err= 0: pid=683791: Tue Oct 8 17:57:08 2024 00:41:17.413 read: IOPS=682, BW=2728KiB/s (2794kB/s)(26.7MiB/10023msec) 00:41:17.413 slat (nsec): min=5403, max=85386, avg=13343.16, stdev=10961.26 00:41:17.413 clat (usec): min=10479, max=49536, avg=23384.65, stdev=3882.24 00:41:17.413 lat (usec): min=10502, max=49550, avg=23397.99, stdev=3882.79 00:41:17.413 clat percentiles (usec): 00:41:17.413 | 1.00th=[13829], 5.00th=[16450], 10.00th=[18220], 20.00th=[21103], 00:41:17.413 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:41:17.413 | 70.00th=[23987], 80.00th=[24249], 90.00th=[27132], 95.00th=[29754], 00:41:17.413 | 99.00th=[35914], 99.50th=[40109], 99.90th=[46400], 99.95th=[49546], 00:41:17.413 | 99.99th=[49546] 00:41:17.413 bw ( KiB/s): min= 2496, max= 3008, per=4.22%, avg=2721.68, stdev=110.71, samples=19 00:41:17.413 iops : min= 624, max= 752, avg=680.42, stdev=27.68, samples=19 00:41:17.413 lat (msec) : 20=17.12%, 50=82.88% 00:41:17.413 cpu : usr=98.68%, sys=0.95%, ctx=71, majf=0, minf=9 00:41:17.413 IO depths : 1=0.5%, 2=1.1%, 4=4.5%, 8=78.6%, 16=15.3%, 32=0.0%, >=64=0.0% 00:41:17.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.413 complete : 0=0.0%, 4=89.4%, 8=8.2%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.413 issued rwts: total=6836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.413 filename1: (groupid=0, jobs=1): err= 0: pid=683792: Tue Oct 8 17:57:08 2024 00:41:17.413 read: IOPS=667, BW=2671KiB/s (2735kB/s)(26.1MiB/10016msec) 00:41:17.413 slat (nsec): min=5536, max=87808, avg=18040.89, stdev=15719.15 00:41:17.413 clat (usec): min=11918, max=36378, avg=23815.48, stdev=1157.21 00:41:17.413 lat (usec): min=11924, max=36393, avg=23833.52, stdev=1155.59 00:41:17.413 clat percentiles (usec): 00:41:17.413 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:41:17.413 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:41:17.413 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24511], 00:41:17.413 | 99.00th=[25035], 99.50th=[33817], 99.90th=[36439], 99.95th=[36439], 00:41:17.413 | 99.99th=[36439] 00:41:17.413 bw ( KiB/s): min= 2560, max= 2816, per=4.14%, avg=2668.80, stdev=62.64, samples=20 00:41:17.413 iops : min= 640, max= 704, avg=667.20, stdev=15.66, samples=20 00:41:17.413 lat (msec) : 20=0.51%, 50=99.49% 00:41:17.413 cpu : usr=98.91%, sys=0.80%, ctx=10, majf=0, minf=9 00:41:17.413 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:17.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.413 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.413 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.413 filename2: (groupid=0, jobs=1): err= 0: pid=683793: Tue Oct 8 17:57:08 2024 00:41:17.413 read: IOPS=659, BW=2638KiB/s (2702kB/s)(25.8MiB/10019msec) 00:41:17.413 slat (nsec): min=5565, max=82018, avg=22188.76, stdev=12896.95 00:41:17.413 clat (usec): 
min=22638, max=55461, avg=24050.70, stdev=2160.55 00:41:17.413 lat (usec): min=22660, max=55468, avg=24072.89, stdev=2159.65 00:41:17.413 clat percentiles (usec): 00:41:17.413 | 1.00th=[22938], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:41:17.413 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:41:17.413 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24773], 00:41:17.413 | 99.00th=[32900], 99.50th=[34341], 99.90th=[55313], 99.95th=[55313], 00:41:17.413 | 99.99th=[55313] 00:41:17.413 bw ( KiB/s): min= 2560, max= 2688, per=4.11%, avg=2647.84, stdev=60.74, samples=19 00:41:17.413 iops : min= 640, max= 672, avg=661.95, stdev=15.20, samples=19 00:41:17.413 lat (msec) : 50=99.76%, 100=0.24% 00:41:17.413 cpu : usr=98.97%, sys=0.74%, ctx=15, majf=0, minf=9 00:41:17.413 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:17.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.413 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.413 issued rwts: total=6608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.413 filename2: (groupid=0, jobs=1): err= 0: pid=683794: Tue Oct 8 17:57:08 2024 00:41:17.413 read: IOPS=668, BW=2673KiB/s (2737kB/s)(26.2MiB/10040msec) 00:41:17.413 slat (nsec): min=5260, max=79989, avg=11844.92, stdev=10691.94 00:41:17.413 clat (usec): min=10813, max=46379, avg=23844.08, stdev=1656.63 00:41:17.413 lat (usec): min=10821, max=46386, avg=23855.92, stdev=1656.03 00:41:17.413 clat percentiles (usec): 00:41:17.413 | 1.00th=[19792], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:41:17.413 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:41:17.413 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:41:17.413 | 99.00th=[25035], 99.50th=[31327], 99.90th=[46400], 99.95th=[46400], 00:41:17.413 | 99.99th=[46400] 00:41:17.413 bw ( KiB/s): min= 2560, max= 2736, per=4.16%, avg=2677.60, stdev=41.62, samples=20 00:41:17.413 iops : min= 640, max= 684, avg=669.40, stdev=10.40, samples=20 00:41:17.413 lat (msec) : 20=1.13%, 50=98.87% 00:41:17.413 cpu : usr=98.76%, sys=0.95%, ctx=17, majf=0, minf=9 00:41:17.413 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:17.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.413 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.413 issued rwts: total=6710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.413 filename2: (groupid=0, jobs=1): err= 0: pid=683795: Tue Oct 8 17:57:08 2024 00:41:17.413 read: IOPS=670, BW=2683KiB/s (2747kB/s)(26.2MiB/10019msec) 00:41:17.413 slat (nsec): min=5480, max=92645, avg=19134.50, stdev=15024.09 00:41:17.413 clat (usec): min=10322, max=33646, avg=23696.05, stdev=1225.57 00:41:17.413 lat (usec): min=10331, max=33666, avg=23715.18, stdev=1224.46 00:41:17.413 clat percentiles (usec): 00:41:17.413 | 1.00th=[16909], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:41:17.413 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:41:17.413 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24511], 95.00th=[24511], 00:41:17.413 | 99.00th=[25035], 99.50th=[25297], 99.90th=[33817], 99.95th=[33817], 00:41:17.413 | 99.99th=[33817] 00:41:17.413 bw ( KiB/s): min= 2560, max= 2816, per=4.16%, avg=2681.60, 
stdev=62.16, samples=20 00:41:17.413 iops : min= 640, max= 704, avg=670.40, stdev=15.54, samples=20 00:41:17.413 lat (msec) : 20=1.13%, 50=98.87% 00:41:17.413 cpu : usr=98.84%, sys=0.87%, ctx=13, majf=0, minf=9 00:41:17.413 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:17.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.413 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.413 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.413 filename2: (groupid=0, jobs=1): err= 0: pid=683796: Tue Oct 8 17:57:08 2024 00:41:17.413 read: IOPS=671, BW=2686KiB/s (2751kB/s)(26.2MiB/10007msec) 00:41:17.413 slat (nsec): min=5590, max=90496, avg=14156.16, stdev=11575.16 00:41:17.413 clat (usec): min=11656, max=25400, avg=23712.89, stdev=1079.26 00:41:17.413 lat (usec): min=11678, max=25407, avg=23727.04, stdev=1077.58 00:41:17.413 clat percentiles (usec): 00:41:17.413 | 1.00th=[18744], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:41:17.414 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:41:17.414 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:41:17.414 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:41:17.414 | 99.99th=[25297] 00:41:17.414 bw ( KiB/s): min= 2560, max= 2816, per=4.16%, avg=2681.60, stdev=65.33, samples=20 00:41:17.414 iops : min= 640, max= 704, avg=670.40, stdev=16.33, samples=20 00:41:17.414 lat (msec) : 20=1.19%, 50=98.81% 00:41:17.414 cpu : usr=98.80%, sys=0.92%, ctx=11, majf=0, minf=9 00:41:17.414 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:17.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.414 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.414 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.414 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.414 filename2: (groupid=0, jobs=1): err= 0: pid=683797: Tue Oct 8 17:57:08 2024 00:41:17.414 read: IOPS=666, BW=2666KiB/s (2730kB/s)(26.1MiB/10034msec) 00:41:17.414 slat (nsec): min=5549, max=63705, avg=12218.81, stdev=9657.70 00:41:17.414 clat (usec): min=15708, max=55228, avg=23889.72, stdev=1684.25 00:41:17.414 lat (usec): min=15714, max=55235, avg=23901.94, stdev=1684.35 00:41:17.414 clat percentiles (usec): 00:41:17.414 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:41:17.414 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:41:17.414 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:41:17.414 | 99.00th=[25035], 99.50th=[27395], 99.90th=[55313], 99.95th=[55313], 00:41:17.414 | 99.99th=[55313] 00:41:17.414 bw ( KiB/s): min= 2560, max= 2688, per=4.15%, avg=2674.53, stdev=40.36, samples=19 00:41:17.414 iops : min= 640, max= 672, avg=668.63, stdev=10.09, samples=19 00:41:17.414 lat (msec) : 20=0.24%, 50=99.52%, 100=0.24% 00:41:17.414 cpu : usr=98.71%, sys=0.83%, ctx=98, majf=0, minf=9 00:41:17.414 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:17.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.414 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.414 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.414 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:41:17.414 filename2: (groupid=0, jobs=1): err= 0: pid=683798: Tue Oct 8 17:57:08 2024 00:41:17.414 read: IOPS=665, BW=2663KiB/s (2727kB/s)(26.1MiB/10022msec) 00:41:17.414 slat (nsec): min=5567, max=68675, avg=18348.45, stdev=10848.90 00:41:17.414 clat (usec): min=15992, max=42726, avg=23879.15, stdev=1435.10 00:41:17.414 lat (usec): min=15998, max=42734, avg=23897.50, stdev=1434.32 00:41:17.414 clat percentiles (usec): 00:41:17.414 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:41:17.414 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:41:17.414 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:41:17.414 | 99.00th=[25035], 99.50th=[33424], 99.90th=[42730], 99.95th=[42730], 00:41:17.414 | 99.99th=[42730] 00:41:17.414 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2667.79, stdev=47.95, samples=19 00:41:17.414 iops : min= 640, max= 672, avg=666.95, stdev=11.99, samples=19 00:41:17.414 lat (msec) : 20=0.03%, 50=99.97% 00:41:17.414 cpu : usr=98.02%, sys=1.38%, ctx=135, majf=0, minf=9 00:41:17.414 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:17.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.414 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.414 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.414 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.414 filename2: (groupid=0, jobs=1): err= 0: pid=683799: Tue Oct 8 17:57:08 2024 00:41:17.414 read: IOPS=664, BW=2659KiB/s (2723kB/s)(26.0MiB/10021msec) 00:41:17.414 slat (nsec): min=5539, max=90313, avg=21809.27, stdev=15229.40 00:41:17.414 clat (usec): min=14640, max=57490, avg=23877.98, stdev=2002.63 00:41:17.414 lat (usec): min=14646, max=57497, avg=23899.79, stdev=2001.61 00:41:17.414 clat percentiles (usec): 00:41:17.414 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:41:17.414 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:41:17.414 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:41:17.414 | 99.00th=[28705], 99.50th=[36439], 99.90th=[57410], 99.95th=[57410], 00:41:17.414 | 99.99th=[57410] 00:41:17.414 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2666.37, stdev=40.33, samples=19 00:41:17.414 iops : min= 640, max= 672, avg=666.58, stdev=10.12, samples=19 00:41:17.414 lat (msec) : 20=0.66%, 50=99.13%, 100=0.21% 00:41:17.414 cpu : usr=98.90%, sys=0.79%, ctx=62, majf=0, minf=9 00:41:17.414 IO depths : 1=3.9%, 2=8.0%, 4=16.7%, 8=60.9%, 16=10.5%, 32=0.0%, >=64=0.0% 00:41:17.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.414 complete : 0=0.0%, 4=92.4%, 8=3.7%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.414 issued rwts: total=6662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.414 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.414 filename2: (groupid=0, jobs=1): err= 0: pid=683800: Tue Oct 8 17:57:08 2024 00:41:17.414 read: IOPS=671, BW=2686KiB/s (2751kB/s)(26.2MiB/10007msec) 00:41:17.414 slat (nsec): min=5443, max=96409, avg=23686.74, stdev=15528.22 00:41:17.414 clat (usec): min=10293, max=25458, avg=23634.27, stdev=1084.34 00:41:17.414 lat (usec): min=10301, max=25465, avg=23657.95, stdev=1084.17 00:41:17.414 clat percentiles (usec): 00:41:17.414 | 1.00th=[16450], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:41:17.414 | 30.00th=[23462], 40.00th=[23725], 
50.00th=[23725], 60.00th=[23725], 00:41:17.414 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:41:17.414 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:41:17.414 | 99.99th=[25560] 00:41:17.414 bw ( KiB/s): min= 2560, max= 2816, per=4.16%, avg=2681.60, stdev=65.33, samples=20 00:41:17.414 iops : min= 640, max= 704, avg=670.40, stdev=16.33, samples=20 00:41:17.414 lat (msec) : 20=1.19%, 50=98.81% 00:41:17.414 cpu : usr=98.95%, sys=0.71%, ctx=92, majf=0, minf=9 00:41:17.414 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:17.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.414 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.414 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.414 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.414 00:41:17.414 Run status group 0 (all jobs): 00:41:17.414 READ: bw=62.9MiB/s (65.9MB/s), 2638KiB/s-2851KiB/s (2702kB/s-2919kB/s), io=632MiB (662MB), run=10002-10045msec 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.414 
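For reference, each destroy_subsystem pass in this trace reduces to the same two RPCs per index: delete the NVMe-oF subsystem first, then the null bdev that backed it. A minimal sketch of that order, assuming SPDK's scripts/rpc.py is on PATH and the target app is still running:

    # teardown order matters: drop the subsystem before its backing bdev
    for sub in 0 1 2; do
        rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub"
        rpc.py bdev_null_delete "bdev_null$sub"
    done

Deleting the subsystem first ensures no namespace still references bdev_null$sub at the moment the bdev is removed.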
17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.414 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.414 bdev_null0 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.415 [2024-10-08 17:57:08.399750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.415 bdev_null1 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
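The fio_bdev/fio_plugin wrapper entered above boils down to preloading SPDK's bdev ioengine into a stock fio binary. A minimal sketch of the resulting invocation, using the plugin path from the trace; bdev.json and jobs.fio are hypothetical stand-ins for the /dev/fd/62 and /dev/fd/61 descriptors the harness actually passes:

    # run stock fio with the SPDK bdev engine preloaded
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json jobs.fio

The ldd/grep/awk steps that follow in the trace only decide whether a sanitizer runtime must be prepended to LD_PRELOAD; on this build asan_lib resolves to empty both times.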
00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:17.415 { 00:41:17.415 "params": { 00:41:17.415 "name": "Nvme$subsystem", 00:41:17.415 "trtype": "$TEST_TRANSPORT", 00:41:17.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:17.415 "adrfam": "ipv4", 00:41:17.415 "trsvcid": "$NVMF_PORT", 00:41:17.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:17.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:17.415 "hdgst": ${hdgst:-false}, 00:41:17.415 "ddgst": ${ddgst:-false} 00:41:17.415 }, 00:41:17.415 "method": "bdev_nvme_attach_controller" 00:41:17.415 } 00:41:17.415 EOF 00:41:17.415 )") 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:17.415 { 00:41:17.415 "params": { 00:41:17.415 "name": "Nvme$subsystem", 00:41:17.415 "trtype": "$TEST_TRANSPORT", 00:41:17.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:17.415 "adrfam": "ipv4", 00:41:17.415 "trsvcid": "$NVMF_PORT", 00:41:17.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:17.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:17.415 "hdgst": ${hdgst:-false}, 00:41:17.415 "ddgst": ${ddgst:-false} 00:41:17.415 }, 00:41:17.415 "method": "bdev_nvme_attach_controller" 00:41:17.415 } 00:41:17.415 EOF 00:41:17.415 )") 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:17.415 "params": { 00:41:17.415 "name": "Nvme0", 00:41:17.415 "trtype": "tcp", 00:41:17.415 "traddr": "10.0.0.2", 00:41:17.415 "adrfam": "ipv4", 00:41:17.415 "trsvcid": "4420", 00:41:17.415 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:17.415 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:17.415 "hdgst": false, 00:41:17.415 "ddgst": false 00:41:17.415 }, 00:41:17.415 "method": "bdev_nvme_attach_controller" 00:41:17.415 },{ 00:41:17.415 "params": { 00:41:17.415 "name": "Nvme1", 00:41:17.415 "trtype": "tcp", 00:41:17.415 "traddr": "10.0.0.2", 00:41:17.415 "adrfam": "ipv4", 00:41:17.415 "trsvcid": "4420", 00:41:17.415 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:17.415 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:17.415 "hdgst": false, 00:41:17.415 "ddgst": false 00:41:17.415 }, 00:41:17.415 "method": "bdev_nvme_attach_controller" 00:41:17.415 }' 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:17.415 17:57:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:17.415 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:17.415 ... 00:41:17.415 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:17.415 ... 
00:41:17.415 fio-3.35 00:41:17.415 Starting 4 threads 00:41:22.701 00:41:22.701 filename0: (groupid=0, jobs=1): err= 0: pid=686563: Tue Oct 8 17:57:14 2024 00:41:22.701 read: IOPS=3081, BW=24.1MiB/s (25.2MB/s)(120MiB/5001msec) 00:41:22.701 slat (nsec): min=5380, max=35652, avg=5866.56, stdev=1161.46 00:41:22.701 clat (usec): min=1223, max=4679, avg=2580.50, stdev=344.06 00:41:22.701 lat (usec): min=1229, max=4686, avg=2586.36, stdev=344.07 00:41:22.701 clat percentiles (usec): 00:41:22.701 | 1.00th=[ 1860], 5.00th=[ 2040], 10.00th=[ 2147], 20.00th=[ 2245], 00:41:22.701 | 30.00th=[ 2442], 40.00th=[ 2573], 50.00th=[ 2671], 60.00th=[ 2671], 00:41:22.701 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2900], 95.00th=[ 3228], 00:41:22.701 | 99.00th=[ 3687], 99.50th=[ 3884], 99.90th=[ 4047], 99.95th=[ 4293], 00:41:22.701 | 99.99th=[ 4686] 00:41:22.701 bw ( KiB/s): min=23920, max=25248, per=26.17%, avg=24689.78, stdev=404.25, samples=9 00:41:22.701 iops : min= 2990, max= 3156, avg=3086.22, stdev=50.53, samples=9 00:41:22.701 lat (msec) : 2=2.30%, 4=97.34%, 10=0.36% 00:41:22.701 cpu : usr=95.38%, sys=3.64%, ctx=242, majf=0, minf=49 00:41:22.701 IO depths : 1=0.1%, 2=0.8%, 4=70.2%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:22.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.701 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.701 issued rwts: total=15411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.701 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:22.701 filename0: (groupid=0, jobs=1): err= 0: pid=686564: Tue Oct 8 17:57:14 2024 00:41:22.701 read: IOPS=2873, BW=22.4MiB/s (23.5MB/s)(112MiB/5002msec) 00:41:22.701 slat (nsec): min=5380, max=74959, avg=6034.63, stdev=1800.45 00:41:22.701 clat (usec): min=1128, max=5464, avg=2768.13, stdev=303.18 00:41:22.701 lat (usec): min=1134, max=5491, avg=2774.16, stdev=303.23 00:41:22.701 clat percentiles (usec): 00:41:22.701 | 1.00th=[ 2311], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2638], 00:41:22.701 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:41:22.701 | 70.00th=[ 2737], 80.00th=[ 2868], 90.00th=[ 2966], 95.00th=[ 3294], 00:41:22.701 | 99.00th=[ 4113], 99.50th=[ 4228], 99.90th=[ 4490], 99.95th=[ 4686], 00:41:22.701 | 99.99th=[ 5342] 00:41:22.701 bw ( KiB/s): min=22544, max=23520, per=24.35%, avg=22970.67, stdev=322.29, samples=9 00:41:22.701 iops : min= 2818, max= 2940, avg=2871.33, stdev=40.29, samples=9 00:41:22.701 lat (msec) : 2=0.13%, 4=98.07%, 10=1.80% 00:41:22.701 cpu : usr=93.64%, sys=4.86%, ctx=218, majf=0, minf=42 00:41:22.701 IO depths : 1=0.1%, 2=0.1%, 4=71.2%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:22.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.701 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.701 issued rwts: total=14371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.701 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:22.701 filename1: (groupid=0, jobs=1): err= 0: pid=686565: Tue Oct 8 17:57:14 2024 00:41:22.701 read: IOPS=2878, BW=22.5MiB/s (23.6MB/s)(112MiB/5001msec) 00:41:22.701 slat (nsec): min=7848, max=36612, avg=8650.49, stdev=2007.03 00:41:22.701 clat (usec): min=965, max=5014, avg=2758.60, stdev=292.83 00:41:22.701 lat (usec): min=973, max=5023, avg=2767.25, stdev=292.84 00:41:22.701 clat percentiles (usec): 00:41:22.701 | 1.00th=[ 2278], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2638], 00:41:22.701 | 30.00th=[ 2671], 40.00th=[ 
2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:41:22.701 | 70.00th=[ 2737], 80.00th=[ 2868], 90.00th=[ 2966], 95.00th=[ 3195], 00:41:22.701 | 99.00th=[ 4113], 99.50th=[ 4293], 99.90th=[ 4686], 99.95th=[ 4752], 00:41:22.701 | 99.99th=[ 5014] 00:41:22.701 bw ( KiB/s): min=22396, max=23312, per=24.36%, avg=22980.89, stdev=305.90, samples=9 00:41:22.701 iops : min= 2799, max= 2914, avg=2872.56, stdev=38.36, samples=9 00:41:22.701 lat (usec) : 1000=0.03% 00:41:22.701 lat (msec) : 2=0.17%, 4=98.37%, 10=1.42% 00:41:22.701 cpu : usr=95.60%, sys=3.94%, ctx=126, majf=0, minf=60 00:41:22.701 IO depths : 1=0.1%, 2=0.1%, 4=68.0%, 8=31.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:22.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.701 complete : 0=0.0%, 4=95.8%, 8=4.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.701 issued rwts: total=14394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.701 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:22.701 filename1: (groupid=0, jobs=1): err= 0: pid=686567: Tue Oct 8 17:57:14 2024 00:41:22.701 read: IOPS=2961, BW=23.1MiB/s (24.3MB/s)(116MiB/5002msec) 00:41:22.701 slat (nsec): min=5385, max=75082, avg=8421.20, stdev=2329.59 00:41:22.701 clat (usec): min=984, max=4422, avg=2679.70, stdev=228.78 00:41:22.701 lat (usec): min=1001, max=4430, avg=2688.12, stdev=228.66 00:41:22.702 clat percentiles (usec): 00:41:22.702 | 1.00th=[ 2024], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2573], 00:41:22.702 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:41:22.702 | 70.00th=[ 2704], 80.00th=[ 2769], 90.00th=[ 2900], 95.00th=[ 2999], 00:41:22.702 | 99.00th=[ 3359], 99.50th=[ 3621], 99.90th=[ 4047], 99.95th=[ 4113], 00:41:22.702 | 99.99th=[ 4424] 00:41:22.702 bw ( KiB/s): min=23440, max=24400, per=25.14%, avg=23722.67, stdev=295.24, samples=9 00:41:22.702 iops : min= 2930, max= 3050, avg=2965.33, stdev=36.91, samples=9 00:41:22.702 lat (usec) : 1000=0.01% 00:41:22.702 lat (msec) : 2=0.83%, 4=99.05%, 10=0.11% 00:41:22.702 cpu : usr=96.56%, sys=3.20%, ctx=7, majf=0, minf=33 00:41:22.702 IO depths : 1=0.1%, 2=0.2%, 4=70.8%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:22.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.702 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.702 issued rwts: total=14813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.702 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:22.702 00:41:22.702 Run status group 0 (all jobs): 00:41:22.702 READ: bw=92.1MiB/s (96.6MB/s), 22.4MiB/s-24.1MiB/s (23.5MB/s-25.2MB/s), io=461MiB (483MB), run=5001-5002msec 00:41:22.962 17:57:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:22.962 17:57:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:22.962 17:57:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:22.962 17:57:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:22.962 17:57:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:22.963 17:57:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:22.963 17:57:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:22.963 17:57:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.963 17:57:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:41:22.963 17:57:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:22.963 17:57:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:22.963 17:57:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.963 17:57:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:22.963 17:57:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:22.963 17:57:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:22.963 17:57:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:22.963 17:57:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:22.963 17:57:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:22.963 17:57:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.963 17:57:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:22.963 17:57:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:22.963 17:57:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:22.963 17:57:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.963 17:57:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:22.963 00:41:22.963 real 0m24.744s 00:41:22.963 user 5m15.222s 00:41:22.963 sys 0m4.859s 00:41:22.963 17:57:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:22.963 17:57:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.963 ************************************ 00:41:22.963 END TEST fio_dif_rand_params 00:41:22.963 ************************************ 00:41:22.963 17:57:14 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:22.963 17:57:14 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:22.963 17:57:14 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:22.963 17:57:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:22.963 ************************************ 00:41:22.963 START TEST fio_dif_digest 00:41:22.963 ************************************ 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:22.963 17:57:14 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:22.963 bdev_null0 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:22.963 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:22.963 [2024-10-08 17:57:14.952497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:23.224 { 00:41:23.224 "params": { 00:41:23.224 "name": "Nvme$subsystem", 00:41:23.224 "trtype": "$TEST_TRANSPORT", 00:41:23.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:23.224 "adrfam": "ipv4", 00:41:23.224 "trsvcid": "$NVMF_PORT", 00:41:23.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:41:23.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:23.224 "hdgst": ${hdgst:-false}, 00:41:23.224 "ddgst": ${ddgst:-false} 00:41:23.224 }, 00:41:23.224 "method": "bdev_nvme_attach_controller" 00:41:23.224 } 00:41:23.224 EOF 00:41:23.224 )") 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:41:23.224 17:57:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:23.224 "params": { 00:41:23.224 "name": "Nvme0", 00:41:23.224 "trtype": "tcp", 00:41:23.224 "traddr": "10.0.0.2", 00:41:23.224 "adrfam": "ipv4", 00:41:23.224 "trsvcid": "4420", 00:41:23.224 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:23.224 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:23.224 "hdgst": true, 00:41:23.224 "ddgst": true 00:41:23.224 }, 00:41:23.224 "method": "bdev_nvme_attach_controller" 00:41:23.224 }' 00:41:23.224 17:57:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:23.224 17:57:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:23.224 17:57:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:23.224 17:57:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:23.224 17:57:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:23.224 17:57:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:23.224 17:57:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:23.224 17:57:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:23.225 17:57:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:23.225 17:57:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:23.485 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:23.485 ... 
00:41:23.485 fio-3.35 00:41:23.485 Starting 3 threads 00:41:35.713 00:41:35.713 filename0: (groupid=0, jobs=1): err= 0: pid=688046: Tue Oct 8 17:57:25 2024 00:41:35.713 read: IOPS=317, BW=39.7MiB/s (41.6MB/s)(397MiB/10007msec) 00:41:35.713 slat (nsec): min=5745, max=33051, avg=7180.75, stdev=1594.86 00:41:35.713 clat (usec): min=5710, max=52434, avg=9437.97, stdev=1662.11 00:41:35.713 lat (usec): min=5717, max=52440, avg=9445.15, stdev=1662.27 00:41:35.713 clat percentiles (usec): 00:41:35.713 | 1.00th=[ 6783], 5.00th=[ 7963], 10.00th=[ 8291], 20.00th=[ 8717], 00:41:35.713 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9503], 00:41:35.713 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10421], 95.00th=[10945], 00:41:35.713 | 99.00th=[13173], 99.50th=[13698], 99.90th=[15008], 99.95th=[52167], 00:41:35.713 | 99.99th=[52691] 00:41:35.713 bw ( KiB/s): min=31232, max=42240, per=35.21%, avg=40652.80, stdev=2418.10, samples=20 00:41:35.713 iops : min= 244, max= 330, avg=317.60, stdev=18.89, samples=20 00:41:35.713 lat (msec) : 10=79.86%, 20=20.04%, 100=0.09% 00:41:35.713 cpu : usr=93.88%, sys=5.87%, ctx=22, majf=0, minf=148 00:41:35.713 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:35.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.713 issued rwts: total=3178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:35.713 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:35.713 filename0: (groupid=0, jobs=1): err= 0: pid=688047: Tue Oct 8 17:57:25 2024 00:41:35.713 read: IOPS=292, BW=36.5MiB/s (38.3MB/s)(367MiB/10046msec) 00:41:35.713 slat (nsec): min=5747, max=30895, avg=7168.57, stdev=1539.31 00:41:35.713 clat (usec): min=6240, max=51955, avg=10245.13, stdev=1969.49 00:41:35.713 lat (usec): min=6249, max=51964, avg=10252.30, stdev=1969.63 00:41:35.713 clat percentiles (usec): 00:41:35.713 | 1.00th=[ 7439], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9372], 00:41:35.713 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10421], 00:41:35.713 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11469], 95.00th=[11994], 00:41:35.713 | 99.00th=[13829], 99.50th=[14222], 99.90th=[51119], 99.95th=[51643], 00:41:35.713 | 99.99th=[52167] 00:41:35.713 bw ( KiB/s): min=29696, max=39936, per=32.52%, avg=37542.40, stdev=2137.46, samples=20 00:41:35.713 iops : min= 232, max= 312, avg=293.30, stdev=16.70, samples=20 00:41:35.713 lat (msec) : 10=44.09%, 20=55.74%, 50=0.07%, 100=0.10% 00:41:35.713 cpu : usr=93.47%, sys=6.27%, ctx=18, majf=0, minf=138 00:41:35.713 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:35.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.713 issued rwts: total=2935,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:35.713 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:35.713 filename0: (groupid=0, jobs=1): err= 0: pid=688048: Tue Oct 8 17:57:25 2024 00:41:35.713 read: IOPS=293, BW=36.7MiB/s (38.5MB/s)(369MiB/10047msec) 00:41:35.713 slat (nsec): min=5776, max=32737, avg=7208.11, stdev=1546.31 00:41:35.713 clat (usec): min=6782, max=52814, avg=10197.09, stdev=3024.96 00:41:35.713 lat (usec): min=6788, max=52821, avg=10204.30, stdev=3025.05 00:41:35.713 clat percentiles (usec): 00:41:35.713 | 1.00th=[ 7832], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9241], 
00:41:35.713 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:41:35.713 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11338], 95.00th=[11863], 00:41:35.713 | 99.00th=[13698], 99.50th=[14746], 99.90th=[51643], 99.95th=[52167], 00:41:35.713 | 99.99th=[52691] 00:41:35.713 bw ( KiB/s): min=30464, max=41216, per=32.67%, avg=37721.60, stdev=2230.64, samples=20 00:41:35.713 iops : min= 238, max= 322, avg=294.70, stdev=17.43, samples=20 00:41:35.713 lat (msec) : 10=53.31%, 20=46.22%, 50=0.03%, 100=0.44% 00:41:35.713 cpu : usr=93.11%, sys=6.63%, ctx=18, majf=0, minf=131 00:41:35.713 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:35.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.713 issued rwts: total=2949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:35.713 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:35.713 00:41:35.713 Run status group 0 (all jobs): 00:41:35.713 READ: bw=113MiB/s (118MB/s), 36.5MiB/s-39.7MiB/s (38.3MB/s-41.6MB/s), io=1133MiB (1188MB), run=10007-10047msec 00:41:35.713 17:57:26 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:41:35.713 17:57:26 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:41:35.713 17:57:26 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:41:35.713 17:57:26 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:35.713 17:57:26 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:41:35.713 17:57:26 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:35.713 17:57:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:35.713 17:57:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:35.713 17:57:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:35.713 17:57:26 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:35.713 17:57:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:35.713 17:57:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:35.713 17:57:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:35.713 00:41:35.713 real 0m11.231s 00:41:35.713 user 0m44.539s 00:41:35.713 sys 0m2.280s 00:41:35.713 17:57:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:35.713 17:57:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:35.713 ************************************ 00:41:35.713 END TEST fio_dif_digest 00:41:35.713 ************************************ 00:41:35.713 17:57:26 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:41:35.713 17:57:26 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:41:35.713 17:57:26 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:35.713 17:57:26 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:41:35.713 17:57:26 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:35.713 17:57:26 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:41:35.713 17:57:26 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:35.713 17:57:26 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:35.713 rmmod nvme_tcp 00:41:35.713 rmmod nvme_fabrics 00:41:35.713 rmmod nvme_keyring 00:41:35.713 17:57:26 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:35.713 17:57:26 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:41:35.713 17:57:26 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:41:35.713 17:57:26 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 677120 ']' 00:41:35.713 17:57:26 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 677120 00:41:35.713 17:57:26 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 677120 ']' 00:41:35.713 17:57:26 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 677120 00:41:35.713 17:57:26 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:41:35.713 17:57:26 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:35.713 17:57:26 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 677120 00:41:35.713 17:57:26 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:35.713 17:57:26 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:35.713 17:57:26 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 677120' 00:41:35.713 killing process with pid 677120 00:41:35.713 17:57:26 nvmf_dif -- common/autotest_common.sh@969 -- # kill 677120 00:41:35.713 17:57:26 nvmf_dif -- common/autotest_common.sh@974 -- # wait 677120 00:41:35.713 17:57:26 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:41:35.713 17:57:26 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:38.259 Waiting for block devices as requested 00:41:38.259 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:38.259 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:38.259 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:38.259 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:38.259 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:38.520 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:38.520 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:38.520 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:38.781 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:41:38.781 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:39.042 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:39.042 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:39.042 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:39.303 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:39.303 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:39.303 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:39.564 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:39.825 17:57:31 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:39.825 17:57:31 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:39.825 17:57:31 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:41:39.825 17:57:31 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:41:39.825 17:57:31 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:39.825 17:57:31 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:41:39.825 17:57:31 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:39.825 17:57:31 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:39.825 17:57:31 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:39.825 17:57:31 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:39.825 17:57:31 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:41.737 17:57:33 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:41.737 
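Condensed, the nvmftestfini/cleanup sequence traced above boils down to the steps below; this is a sketch assembled from the xtrace, not the function bodies themselves, and the pid, namespace, and cvl_* interface names are the ones from this particular run:

  # Teardown sketch (names specific to this run)
  nvmfpid=677120
  kill "$nvmfpid"                                        # killprocess: stop nvmf_tgt
  modprobe -v -r nvme-tcp nvme-fabrics                   # unload kernel initiator modules
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                        # remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # clear initiator-side address
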
00:41:41.737 real 1m19.223s 00:41:41.737 user 7m51.755s 00:41:41.737 sys 0m23.271s 00:41:41.737 17:57:33 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:41.737 17:57:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:41.737 ************************************ 00:41:41.737 END TEST nvmf_dif 00:41:41.737 ************************************ 00:41:41.998 17:57:33 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:41.998 17:57:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:41.998 17:57:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:41.998 17:57:33 -- common/autotest_common.sh@10 -- # set +x 00:41:41.998 ************************************ 00:41:41.998 START TEST nvmf_abort_qd_sizes 00:41:41.998 ************************************ 00:41:41.998 17:57:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:41.998 * Looking for test storage... 00:41:41.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:41.998 17:57:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:41.998 17:57:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:41:41.998 17:57:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:41.998 17:57:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:41.998 17:57:33 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:41.998 17:57:33 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:41.998 17:57:33 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:41.998 17:57:33 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:41:41.998 17:57:33 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:41:41.998 17:57:33 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:41:41.998 17:57:33 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:41:41.998 17:57:33 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:41:41.998 17:57:33 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:41:41.998 17:57:33 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:41:41.998 17:57:33 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:42.260 17:57:33 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:41:42.260 17:57:33 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:41:42.260 17:57:33 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:42.261 17:57:33 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:42.261 17:57:33 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:41:42.261 17:57:33 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:41:42.261 17:57:33 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:42.261 17:57:33 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:41:42.261 17:57:33 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:41:42.261 17:57:33 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:42.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:42.261 --rc genhtml_branch_coverage=1 00:41:42.261 --rc genhtml_function_coverage=1 00:41:42.261 --rc genhtml_legend=1 00:41:42.261 --rc geninfo_all_blocks=1 00:41:42.261 --rc geninfo_unexecuted_blocks=1 00:41:42.261 00:41:42.261 ' 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:42.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:42.261 --rc genhtml_branch_coverage=1 00:41:42.261 --rc genhtml_function_coverage=1 00:41:42.261 --rc genhtml_legend=1 00:41:42.261 --rc geninfo_all_blocks=1 00:41:42.261 --rc geninfo_unexecuted_blocks=1 00:41:42.261 00:41:42.261 ' 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:42.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:42.261 --rc genhtml_branch_coverage=1 00:41:42.261 --rc genhtml_function_coverage=1 00:41:42.261 --rc genhtml_legend=1 00:41:42.261 --rc geninfo_all_blocks=1 00:41:42.261 --rc geninfo_unexecuted_blocks=1 00:41:42.261 00:41:42.261 ' 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:42.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:42.261 --rc genhtml_branch_coverage=1 00:41:42.261 --rc genhtml_function_coverage=1 00:41:42.261 --rc genhtml_legend=1 00:41:42.261 --rc geninfo_all_blocks=1 00:41:42.261 --rc geninfo_unexecuted_blocks=1 00:41:42.261 00:41:42.261 ' 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:42.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:41:42.261 17:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:50.403 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:50.404 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:50.404 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:41:50.404 Found net devices under 0000:31:00.0: cvl_0_0 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:50.404 Found net devices under 0000:31:00.1: cvl_0_1 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:50.404 17:57:41 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:50.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:50.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.707 ms 00:41:50.404 00:41:50.404 --- 10.0.0.2 ping statistics --- 00:41:50.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:50.404 rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms 00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:50.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0
00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']'
00:41:50.404 17:57:41 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:41:53.703 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:41:53.703 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:41:53.703 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:41:53.703 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:41:53.703 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:41:53.703 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:41:53.703 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:41:53.703 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:41:53.703 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:41:53.703 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:41:53.703 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:41:53.703 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:41:53.703 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:41:53.703 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:41:53.703 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:41:53.703 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:41:53.703 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:41:53.964 17:57:45 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:41:53.964 17:57:45 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:41:53.964 17:57:45 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:41:53.964 17:57:45 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:41:53.964 17:57:45 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:41:53.964 17:57:45 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:41:53.964 17:57:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf
00:41:53.964 17:57:45 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:41:53.964 17:57:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable
00:41:53.964 17:57:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:41:53.964 17:57:45 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=697639
00:41:53.964 17:57:45 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 697639
00:41:53.964 17:57:45 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
00:41:53.964 17:57:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 697639 ']'
00:41:53.964 17:57:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:41:53.964 17:57:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100
00:41:53.964 17:57:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:41:53.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:53.964 17:57:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:53.964 17:57:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:53.964 [2024-10-08 17:57:45.923003] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:41:53.964 [2024-10-08 17:57:45.923067] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:54.224 [2024-10-08 17:57:46.015268] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:54.225 [2024-10-08 17:57:46.111198] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:54.225 [2024-10-08 17:57:46.111262] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:54.225 [2024-10-08 17:57:46.111271] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:54.225 [2024-10-08 17:57:46.111278] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:54.225 [2024-10-08 17:57:46.111285] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:54.225 [2024-10-08 17:57:46.113454] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:41:54.225 [2024-10-08 17:57:46.113615] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:41:54.225 [2024-10-08 17:57:46.113774] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:41:54.225 [2024-10-08 17:57:46.113775] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:41:54.795 17:57:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:54.795 17:57:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:41:54.795 17:57:46 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:54.795 17:57:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:54.795 17:57:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:54.795 17:57:46 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:54.795 17:57:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:41:54.795 17:57:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:41:54.795 17:57:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:41:54.795 17:57:46 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:41:54.795 17:57:46 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:41:54.795 17:57:46 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:41:54.795 17:57:46 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:41:54.795 17:57:46 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:41:54.795 17:57:46 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:41:54.795 17:57:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:41:55.056 
17:57:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:41:55.056 17:57:46 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:41:55.056 17:57:46 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:41:55.056 17:57:46 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:41:55.056 17:57:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:41:55.056 17:57:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:41:55.056 17:57:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:41:55.056 17:57:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:55.056 17:57:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:55.056 17:57:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:55.056 ************************************ 00:41:55.056 START TEST spdk_target_abort 00:41:55.056 ************************************ 00:41:55.056 17:57:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:41:55.056 17:57:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:41:55.056 17:57:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:41:55.056 17:57:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:55.056 17:57:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:55.317 spdk_targetn1 00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:55.317 [2024-10-08 17:57:47.147794] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:41:55.317 [2024-10-08 17:57:47.188072] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2'
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
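With the target string assembled, rabort loops the abort example over qds=(4 24 64); the run for qd=4 follows immediately below. Condensed into plain commands, the target bring-up that rpc_cmd drove above plus one loop pass looks like this (a sketch: the rpc.py helper path is assumed rather than taken from the log, everything else mirrors a record in this run):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path
  $RPC bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target   # local NVMe -> bdev
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  for qd in 4 24 64; do   # rabort's loop: queue depth, 50/50 read/write mix, 4 KiB I/O
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
          -q "$qd" -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done

The ABORTED - BY REQUEST completions that follow are the expected result: the example deliberately submits abort commands against in-flight reads and writes and counts how many succeed.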
00:41:55.317 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:41:55.318 17:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:41:55.578 [2024-10-08 17:57:47.376997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:904 len:8 PRP1 0x2000078c4000 PRP2 0x0
00:41:55.578 [2024-10-08 17:57:47.377024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0073 p:1 m:0 dnr:0
00:41:55.578 [2024-10-08 17:57:47.407458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1896 len:8 PRP1 0x2000078be000 PRP2 0x0
00:41:55.578 [2024-10-08 17:57:47.407475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00ef p:1 m:0 dnr:0
00:41:55.578 [2024-10-08 17:57:47.423480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2448 len:8 PRP1 0x2000078c0000 PRP2 0x0
00:41:55.578 [2024-10-08 17:57:47.423497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:41:55.578 [2024-10-08 17:57:47.423553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2456 len:8 PRP1 0x2000078c6000 PRP2 0x0
00:41:55.578 [2024-10-08 17:57:47.423560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:41:55.578 [2024-10-08 17:57:47.455432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3568 len:8 PRP1 0x2000078c6000 PRP2 0x0
00:41:55.578 [2024-10-08 17:57:47.455448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00c2 p:0 m:0 dnr:0
00:41:55.578 [2024-10-08 17:57:47.463378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3880 len:8 PRP1 0x2000078be000 PRP2 0x0
00:41:55.578 [2024-10-08 17:57:47.463393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00e7 p:0 m:0 dnr:0
00:41:58.879 Initializing NVMe Controllers
00:41:58.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:41:58.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:41:58.879 Initialization complete. Launching workers.
00:41:58.879 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13017, failed: 6 00:41:58.879 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3367, failed to submit 9656 00:41:58.879 success 781, unsuccessful 2586, failed 0 00:41:58.879 17:57:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:58.879 17:57:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:58.879 [2024-10-08 17:57:50.648107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:312 len:8 PRP1 0x200007c58000 PRP2 0x0 00:41:58.879 [2024-10-08 17:57:50.648149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:0035 p:1 m:0 dnr:0 00:41:58.879 [2024-10-08 17:57:50.779072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:3312 len:8 PRP1 0x200007c5c000 PRP2 0x0 00:41:58.879 [2024-10-08 17:57:50.779100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:00aa p:0 m:0 dnr:0 00:42:01.421 [2024-10-08 17:57:53.348867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:61952 len:8 PRP1 0x200007c5a000 PRP2 0x0 00:42:01.421 [2024-10-08 17:57:53.348895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:0046 p:1 m:0 dnr:0 00:42:01.992 Initializing NVMe Controllers 00:42:01.992 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:01.992 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:01.992 Initialization complete. Launching workers. 
00:42:01.992 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8531, failed: 3 00:42:01.992 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1213, failed to submit 7321 00:42:01.992 success 363, unsuccessful 850, failed 0 00:42:01.992 17:57:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:01.992 17:57:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:02.252 [2024-10-08 17:57:54.027062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:163 nsid:1 lba:1880 len:8 PRP1 0x200007900000 PRP2 0x0 00:42:02.252 [2024-10-08 17:57:54.027088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:163 cdw0:0 sqhd:00d1 p:0 m:0 dnr:0 00:42:03.635 [2024-10-08 17:57:55.564767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:185 nsid:1 lba:181728 len:8 PRP1 0x20000791a000 PRP2 0x0 00:42:03.635 [2024-10-08 17:57:55.564807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:185 cdw0:0 sqhd:009e p:0 m:0 dnr:0 00:42:05.544 Initializing NVMe Controllers 00:42:05.544 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:05.544 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:05.544 Initialization complete. Launching workers. 00:42:05.544 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43863, failed: 2 00:42:05.544 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2838, failed to submit 41027 00:42:05.544 success 586, unsuccessful 2252, failed 0 00:42:05.544 17:57:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:42:05.544 17:57:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:05.544 17:57:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:05.544 17:57:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:05.544 17:57:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:42:05.544 17:57:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:05.544 17:57:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:07.462 17:57:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:07.462 17:57:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 697639 00:42:07.462 17:57:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 697639 ']' 00:42:07.462 17:57:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 697639 00:42:07.462 17:57:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:42:07.462 17:57:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:07.462 17:57:58 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 697639 00:42:07.462 17:57:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:07.462 17:57:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:07.462 17:57:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 697639' 00:42:07.462 killing process with pid 697639 00:42:07.462 17:57:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 697639 00:42:07.462 17:57:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 697639 00:42:07.462 00:42:07.462 real 0m12.285s 00:42:07.462 user 0m49.797s 00:42:07.462 sys 0m2.068s 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:07.462 ************************************ 00:42:07.462 END TEST spdk_target_abort 00:42:07.462 ************************************ 00:42:07.462 17:57:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:07.462 17:57:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:07.462 17:57:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:07.462 17:57:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:07.462 ************************************ 00:42:07.462 START TEST kernel_target_abort 00:42:07.462 ************************************ 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:07.462 17:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:10.762 Waiting for block devices as requested 00:42:10.762 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:11.023 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:11.023 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:11.023 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:11.283 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:11.283 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:11.283 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:11.283 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:11.544 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:42:11.803 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:11.803 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:11.803 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:12.064 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:12.064 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:12.064 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:12.064 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:12.324 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:12.585 No valid GPT data, bailing 00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- 
scripts/common.sh@395 -- # return 1
00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1
00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]]
00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1
00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1
00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1
00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1
00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp
00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420
00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4
00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:42:12.585 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420
00:42:12.846
00:42:12.846 Discovery Log Number of Records 2, Generation counter 2
00:42:12.846 =====Discovery Log Entry 0======
00:42:12.846 trtype: tcp
00:42:12.846 adrfam: ipv4
00:42:12.846 subtype: current discovery subsystem
00:42:12.846 treq: not specified, sq flow control disable supported
00:42:12.846 portid: 1
00:42:12.846 trsvcid: 4420
00:42:12.846 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:42:12.846 traddr: 10.0.0.1
00:42:12.846 eflags: none
00:42:12.846 sectype: none
00:42:12.846 =====Discovery Log Entry 1======
00:42:12.846 trtype: tcp
00:42:12.846 adrfam: ipv4
00:42:12.846 subtype: nvme subsystem
00:42:12.846 treq: not specified, sq flow control disable supported
00:42:12.846 portid: 1
00:42:12.846 trsvcid: 4420
00:42:12.846 subnqn: nqn.2016-06.io.spdk:testnqn
00:42:12.846 traddr: 10.0.0.1
00:42:12.846 eflags: none
00:42:12.846 sectype: none
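For kernel_target_abort the target is the in-kernel nvmet driver rather than an SPDK process, and it is configured purely through configfs. xtrace shows the mkdir and echo commands above but hides their redirect targets, so the sketch below fills those in from the standard nvmet configfs layout; the attribute file names are inferred, not taken from this log (the first echo, SPDK-nqn..., most likely writes the subsystem's attr_model):

  cd /sys/kernel/config/nvmet
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  mkdir ports/1
  echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host   # inferred target
  echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
  echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  echo 10.0.0.1 > ports/1/addr_traddr
  echo tcp > ports/1/addr_trtype
  echo 4420 > ports/1/addr_trsvcid
  echo ipv4 > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

The nvme discover output above confirms the result: the port exports both the discovery subsystem and nqn.2016-06.io.spdk:testnqn at 10.0.0.1:4420, after which the same rabort loop is pointed at the kernel target.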
00:42:12.846 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:42:12.846 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:42:12.846 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:42:12.846 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:42:12.846 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:42:12.846 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:42:12.846 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:42:12.846 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:42:12.846 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:42:12.846 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:42:12.846 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:42:12.846 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:42:12.846 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:42:12.846 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:42:12.846 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1'
00:42:12.846 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:42:12.846 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420'
00:42:12.846 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:42:12.846 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:42:12.846 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:42:12.846 17:58:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:42:16.145 Initializing NVMe Controllers
00:42:16.145 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:42:16.145 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:42:16.145 Initialization complete. Launching workers.
00:42:16.145 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67339, failed: 0
00:42:16.145 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67339, failed to submit 0
00:42:16.145 success 0, unsuccessful 67339, failed 0
00:42:16.145 17:58:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:42:16.145 17:58:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:42:19.451 Initializing NVMe Controllers
00:42:19.451 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:42:19.451 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:42:19.451 Initialization complete.
Launching workers. 00:42:19.451 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 114347, failed: 0 00:42:19.451 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28794, failed to submit 85553 00:42:19.451 success 0, unsuccessful 28794, failed 0 00:42:19.451 17:58:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:19.451 17:58:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:22.013 Initializing NVMe Controllers 00:42:22.013 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:22.013 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:22.013 Initialization complete. Launching workers. 00:42:22.013 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146225, failed: 0 00:42:22.013 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36590, failed to submit 109635 00:42:22.013 success 0, unsuccessful 36590, failed 0 00:42:22.013 17:58:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:22.013 17:58:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:22.013 17:58:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:42:22.013 17:58:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:22.013 17:58:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:22.013 17:58:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:22.013 17:58:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:22.013 17:58:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:42:22.013 17:58:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:42:22.273 17:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:25.569 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:25.569 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:25.830 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:25.830 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:25.830 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:25.830 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:25.830 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:42:25.830 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:25.830 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:25.830 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:25.830 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:25.830 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:25.830 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:25.830 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:25.830 0000:00:01.0 
(8086 0b00): ioatdma -> vfio-pci 00:42:25.830 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:27.742 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:42:28.004 00:42:28.004 real 0m20.659s 00:42:28.004 user 0m9.952s 00:42:28.004 sys 0m6.312s 00:42:28.004 17:58:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:28.004 17:58:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:28.004 ************************************ 00:42:28.004 END TEST kernel_target_abort 00:42:28.004 ************************************ 00:42:28.004 17:58:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:28.004 17:58:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:28.004 17:58:19 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:28.004 17:58:19 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:42:28.004 17:58:19 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:28.004 17:58:19 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:42:28.004 17:58:19 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:28.004 17:58:19 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:28.004 rmmod nvme_tcp 00:42:28.004 rmmod nvme_fabrics 00:42:28.004 rmmod nvme_keyring 00:42:28.004 17:58:19 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:28.004 17:58:19 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:42:28.004 17:58:19 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:42:28.004 17:58:19 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 697639 ']' 00:42:28.004 17:58:19 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 697639 00:42:28.004 17:58:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 697639 ']' 00:42:28.004 17:58:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 697639 00:42:28.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (697639) - No such process 00:42:28.004 17:58:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 697639 is not found' 00:42:28.004 Process with pid 697639 is not found 00:42:28.004 17:58:19 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:42:28.004 17:58:19 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:32.214 Waiting for block devices as requested 00:42:32.214 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:32.214 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:32.214 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:32.214 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:32.214 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:32.214 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:32.214 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:32.214 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:32.214 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:42:32.475 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:32.475 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:32.475 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:32.735 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:32.735 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:32.735 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:32.995 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 
00:42:32.995 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:33.257 17:58:25 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:33.257 17:58:25 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:33.257 17:58:25 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:42:33.257 17:58:25 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:42:33.257 17:58:25 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:33.257 17:58:25 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:42:33.257 17:58:25 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:33.257 17:58:25 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:33.257 17:58:25 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:33.257 17:58:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:33.257 17:58:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:35.801 17:58:27 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:35.801 00:42:35.801 real 0m53.417s 00:42:35.801 user 1m5.410s 00:42:35.801 sys 0m19.786s 00:42:35.801 17:58:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:35.801 17:58:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:35.801 ************************************ 00:42:35.801 END TEST nvmf_abort_qd_sizes 00:42:35.801 ************************************ 00:42:35.801 17:58:27 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:35.801 17:58:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:35.801 17:58:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:35.801 17:58:27 -- common/autotest_common.sh@10 -- # set +x 00:42:35.801 ************************************ 00:42:35.801 START TEST keyring_file 00:42:35.801 ************************************ 00:42:35.801 17:58:27 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:35.801 * Looking for test storage... 
00:42:35.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:35.801 17:58:27 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:35.801 17:58:27 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:42:35.801 17:58:27 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:35.801 17:58:27 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:35.801 17:58:27 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:35.801 17:58:27 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:35.801 17:58:27 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:35.801 17:58:27 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:42:35.801 17:58:27 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:42:35.801 17:58:27 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:42:35.801 17:58:27 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:42:35.801 17:58:27 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:42:35.801 17:58:27 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:42:35.801 17:58:27 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:42:35.801 17:58:27 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:35.801 17:58:27 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:42:35.801 17:58:27 keyring_file -- scripts/common.sh@345 -- # : 1 00:42:35.802 17:58:27 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:35.802 17:58:27 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:35.802 17:58:27 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:42:35.802 17:58:27 keyring_file -- scripts/common.sh@353 -- # local d=1 00:42:35.802 17:58:27 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:35.802 17:58:27 keyring_file -- scripts/common.sh@355 -- # echo 1 00:42:35.802 17:58:27 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:42:35.802 17:58:27 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:42:35.802 17:58:27 keyring_file -- scripts/common.sh@353 -- # local d=2 00:42:35.802 17:58:27 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:35.802 17:58:27 keyring_file -- scripts/common.sh@355 -- # echo 2 00:42:35.802 17:58:27 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:42:35.802 17:58:27 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:35.802 17:58:27 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:35.802 17:58:27 keyring_file -- scripts/common.sh@368 -- # return 0 00:42:35.802 17:58:27 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:35.802 17:58:27 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:35.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:35.802 --rc genhtml_branch_coverage=1 00:42:35.802 --rc genhtml_function_coverage=1 00:42:35.802 --rc genhtml_legend=1 00:42:35.802 --rc geninfo_all_blocks=1 00:42:35.802 --rc geninfo_unexecuted_blocks=1 00:42:35.802 00:42:35.802 ' 00:42:35.802 17:58:27 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:35.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:35.802 --rc genhtml_branch_coverage=1 00:42:35.802 --rc genhtml_function_coverage=1 00:42:35.802 --rc genhtml_legend=1 00:42:35.802 --rc geninfo_all_blocks=1 
00:42:35.802 --rc geninfo_unexecuted_blocks=1 00:42:35.802 00:42:35.802 ' 00:42:35.802 17:58:27 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:35.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:35.802 --rc genhtml_branch_coverage=1 00:42:35.802 --rc genhtml_function_coverage=1 00:42:35.802 --rc genhtml_legend=1 00:42:35.802 --rc geninfo_all_blocks=1 00:42:35.802 --rc geninfo_unexecuted_blocks=1 00:42:35.802 00:42:35.802 ' 00:42:35.802 17:58:27 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:35.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:35.802 --rc genhtml_branch_coverage=1 00:42:35.802 --rc genhtml_function_coverage=1 00:42:35.802 --rc genhtml_legend=1 00:42:35.802 --rc geninfo_all_blocks=1 00:42:35.802 --rc geninfo_unexecuted_blocks=1 00:42:35.802 00:42:35.802 ' 00:42:35.802 17:58:27 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:35.802 17:58:27 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:35.802 17:58:27 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:42:35.802 17:58:27 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:35.802 17:58:27 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:35.802 17:58:27 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:35.802 17:58:27 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:35.802 17:58:27 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:35.802 17:58:27 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:35.802 17:58:27 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:35.802 17:58:27 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@51 -- # : 0 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:35.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:35.802 17:58:27 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:35.802 17:58:27 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:35.802 17:58:27 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:35.802 17:58:27 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:35.802 17:58:27 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:35.802 17:58:27 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:35.802 17:58:27 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:35.802 17:58:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:42:35.802 17:58:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:35.802 17:58:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:35.802 17:58:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:35.802 17:58:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:35.802 17:58:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.3xKNQMusc4 00:42:35.802 17:58:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:42:35.802 17:58:27 keyring_file -- nvmf/common.sh@731 -- # python - 00:42:35.802 17:58:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.3xKNQMusc4 00:42:35.802 17:58:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.3xKNQMusc4 00:42:35.802 17:58:27 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.3xKNQMusc4 00:42:35.802 17:58:27 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:35.802 17:58:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:35.802 17:58:27 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:35.802 17:58:27 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:35.802 17:58:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:35.802 17:58:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:35.802 17:58:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.sLsLpP3zS4 00:42:35.802 17:58:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:35.803 17:58:27 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:35.803 17:58:27 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:42:35.803 17:58:27 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:35.803 17:58:27 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:42:35.803 17:58:27 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:42:35.803 17:58:27 keyring_file -- nvmf/common.sh@731 -- # python - 00:42:35.803 17:58:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.sLsLpP3zS4 00:42:35.803 17:58:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.sLsLpP3zS4 00:42:35.803 17:58:27 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.sLsLpP3zS4 00:42:35.803 17:58:27 keyring_file -- keyring/file.sh@30 -- # tgtpid=708211 00:42:35.803 17:58:27 keyring_file -- keyring/file.sh@32 -- # waitforlisten 708211 00:42:35.803 17:58:27 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:35.803 17:58:27 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 708211 ']' 00:42:35.803 17:58:27 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:35.803 17:58:27 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:35.803 17:58:27 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:35.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:35.803 17:58:27 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:35.803 17:58:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:35.803 [2024-10-08 17:58:27.697719] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:42:35.803 [2024-10-08 17:58:27.697776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid708211 ] 00:42:35.803 [2024-10-08 17:58:27.752103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:36.063 [2024-10-08 17:58:27.807921] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:42:36.063 17:58:27 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:36.063 17:58:27 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:36.063 17:58:27 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:36.063 17:58:27 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:36.063 17:58:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:36.063 [2024-10-08 17:58:27.992542] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:36.063 null0 00:42:36.063 [2024-10-08 17:58:28.024592] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:36.063 [2024-10-08 17:58:28.024982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:36.063 17:58:28 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:36.063 17:58:28 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:36.063 17:58:28 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:36.063 17:58:28 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:36.063 17:58:28 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:42:36.063 17:58:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:36.063 17:58:28 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:42:36.063 17:58:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:36.063 17:58:28 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:36.063 17:58:28 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:36.063 17:58:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:36.324 [2024-10-08 17:58:28.056667] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:36.324 request: 00:42:36.324 { 00:42:36.324 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:36.324 "secure_channel": false, 00:42:36.324 "listen_address": { 00:42:36.324 "trtype": "tcp", 00:42:36.324 "traddr": "127.0.0.1", 00:42:36.324 "trsvcid": "4420" 00:42:36.324 }, 00:42:36.324 "method": "nvmf_subsystem_add_listener", 00:42:36.324 "req_id": 1 00:42:36.324 } 00:42:36.324 Got JSON-RPC error response 00:42:36.324 response: 00:42:36.324 { 00:42:36.324 "code": 
-32602, 00:42:36.324 "message": "Invalid parameters" 00:42:36.324 } 00:42:36.324 17:58:28 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:42:36.324 17:58:28 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:36.324 17:58:28 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:36.324 17:58:28 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:36.324 17:58:28 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:36.324 17:58:28 keyring_file -- keyring/file.sh@47 -- # bperfpid=708217 00:42:36.324 17:58:28 keyring_file -- keyring/file.sh@49 -- # waitforlisten 708217 /var/tmp/bperf.sock 00:42:36.324 17:58:28 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 708217 ']' 00:42:36.324 17:58:28 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:36.324 17:58:28 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:36.324 17:58:28 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:36.324 17:58:28 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:36.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:36.324 17:58:28 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:36.324 17:58:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:36.324 [2024-10-08 17:58:28.114515] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:42:36.324 [2024-10-08 17:58:28.114562] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid708217 ] 00:42:36.324 [2024-10-08 17:58:28.191370] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:36.325 [2024-10-08 17:58:28.257051] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:42:37.266 17:58:28 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:37.266 17:58:28 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:37.266 17:58:28 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3xKNQMusc4 00:42:37.266 17:58:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3xKNQMusc4 00:42:37.266 17:58:29 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.sLsLpP3zS4 00:42:37.266 17:58:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.sLsLpP3zS4 00:42:37.527 17:58:29 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:37.527 17:58:29 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:42:37.527 17:58:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:37.527 17:58:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:37.527 17:58:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:37.527 
17:58:29 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.3xKNQMusc4 == \/\t\m\p\/\t\m\p\.\3\x\K\N\Q\M\u\s\c\4 ]] 00:42:37.527 17:58:29 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:42:37.527 17:58:29 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:42:37.527 17:58:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:37.527 17:58:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:37.527 17:58:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:37.787 17:58:29 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.sLsLpP3zS4 == \/\t\m\p\/\t\m\p\.\s\L\s\L\p\P\3\z\S\4 ]] 00:42:37.787 17:58:29 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:42:37.787 17:58:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:37.787 17:58:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:37.787 17:58:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:37.787 17:58:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:37.787 17:58:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:38.048 17:58:29 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:38.048 17:58:29 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:42:38.048 17:58:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:38.048 17:58:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:38.048 17:58:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:38.048 17:58:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:38.048 17:58:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:38.048 17:58:29 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:42:38.048 17:58:29 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:38.048 17:58:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:38.309 [2024-10-08 17:58:30.152346] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:38.309 nvme0n1 00:42:38.309 17:58:30 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:42:38.309 17:58:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:38.309 17:58:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:38.309 17:58:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:38.309 17:58:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:38.309 17:58:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:38.569 17:58:30 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:42:38.569 17:58:30 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:42:38.569 17:58:30 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:42:38.569 17:58:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:38.569 17:58:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:38.569 17:58:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:38.569 17:58:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:38.830 17:58:30 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:42:38.830 17:58:30 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:38.830 Running I/O for 1 seconds... 00:42:39.773 16703.00 IOPS, 65.25 MiB/s 00:42:39.773 Latency(us) 00:42:39.773 [2024-10-08T15:58:31.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:39.773 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:39.773 nvme0n1 : 1.00 16764.15 65.48 0.00 0.00 7620.94 2812.59 18459.31 00:42:39.773 [2024-10-08T15:58:31.765Z] =================================================================================================================== 00:42:39.773 [2024-10-08T15:58:31.765Z] Total : 16764.15 65.48 0.00 0.00 7620.94 2812.59 18459.31 00:42:39.773 { 00:42:39.773 "results": [ 00:42:39.773 { 00:42:39.773 "job": "nvme0n1", 00:42:39.773 "core_mask": "0x2", 00:42:39.773 "workload": "randrw", 00:42:39.773 "percentage": 50, 00:42:39.773 "status": "finished", 00:42:39.773 "queue_depth": 128, 00:42:39.773 "io_size": 4096, 00:42:39.773 "runtime": 1.004107, 00:42:39.773 "iops": 16764.149637439037, 00:42:39.773 "mibps": 65.48495952124624, 00:42:39.773 "io_failed": 0, 00:42:39.773 "io_timeout": 0, 00:42:39.773 "avg_latency_us": 7620.939782569952, 00:42:39.773 "min_latency_us": 2812.5866666666666, 00:42:39.773 "max_latency_us": 18459.306666666667 00:42:39.773 } 00:42:39.773 ], 00:42:39.773 "core_count": 1 00:42:39.773 } 00:42:39.773 17:58:31 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:39.773 17:58:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:40.032 17:58:31 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:42:40.032 17:58:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:40.032 17:58:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:40.032 17:58:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:40.032 17:58:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:40.032 17:58:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:40.292 17:58:32 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:40.292 17:58:32 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:42:40.293 17:58:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:40.293 17:58:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:40.293 17:58:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:40.293 17:58:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:40.293 17:58:32 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:40.293 17:58:32 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:42:40.293 17:58:32 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:40.293 17:58:32 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:40.293 17:58:32 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:40.293 17:58:32 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:40.293 17:58:32 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:40.293 17:58:32 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:40.293 17:58:32 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:40.293 17:58:32 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:40.293 17:58:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:40.553 [2024-10-08 17:58:32.437745] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:40.553 [2024-10-08 17:58:32.437941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x752a80 (107): Transport endpoint is not connected 00:42:40.553 [2024-10-08 17:58:32.438937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x752a80 (9): Bad file descriptor 00:42:40.553 [2024-10-08 17:58:32.439938] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:40.553 [2024-10-08 17:58:32.439947] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:40.553 [2024-10-08 17:58:32.439953] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:40.553 [2024-10-08 17:58:32.439960] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
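Note: every bperf_cmd in this suite is scripts/rpc.py speaking plain JSON-RPC 2.0 over the /var/tmp/bperf.sock Unix socket. The sketch below reproduces the failing negative-test call (attach with key1 while the listener only accepts key0) whose request and error-response dump follow; spdk_rpc is a hypothetical helper, and the buffered-read loop is an assumption, since SPDK does not length-frame its responses.

#!/usr/bin/env python3
# Minimal sketch of what bperf_cmd / scripts/rpc.py does on the wire.
# The socket path, method name, and parameter values are taken from the
# log above; everything else here is illustrative, not SPDK's client.
import json
import socket

def spdk_rpc(sock_path, method, params):
    req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full response")
            buf += chunk
            try:
                resp = json.loads(buf)  # keep reading until it parses
                break
            except ValueError:
                pass  # partial response, buffer more
    if "error" in resp:
        # e.g. {"code": -5, "message": "Input/output error"} for this test
        raise RuntimeError(resp["error"])
    return resp["result"]

# The negative test above: offer key1 where the target expects key0.
spdk_rpc("/var/tmp/bperf.sock", "bdev_nvme_attach_controller", {
    "name": "nvme0", "trtype": "tcp", "traddr": "127.0.0.1",
    "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "psk": "key1",
})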
00:42:40.553 request: 00:42:40.553 { 00:42:40.553 "name": "nvme0", 00:42:40.553 "trtype": "tcp", 00:42:40.553 "traddr": "127.0.0.1", 00:42:40.553 "adrfam": "ipv4", 00:42:40.553 "trsvcid": "4420", 00:42:40.553 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:40.553 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:40.553 "prchk_reftag": false, 00:42:40.553 "prchk_guard": false, 00:42:40.553 "hdgst": false, 00:42:40.553 "ddgst": false, 00:42:40.553 "psk": "key1", 00:42:40.553 "allow_unrecognized_csi": false, 00:42:40.553 "method": "bdev_nvme_attach_controller", 00:42:40.553 "req_id": 1 00:42:40.553 } 00:42:40.553 Got JSON-RPC error response 00:42:40.553 response: 00:42:40.553 { 00:42:40.553 "code": -5, 00:42:40.553 "message": "Input/output error" 00:42:40.553 } 00:42:40.553 17:58:32 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:40.553 17:58:32 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:40.553 17:58:32 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:40.553 17:58:32 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:40.553 17:58:32 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:42:40.553 17:58:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:40.553 17:58:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:40.554 17:58:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:40.554 17:58:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:40.554 17:58:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:40.814 17:58:32 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:40.814 17:58:32 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:42:40.814 17:58:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:40.814 17:58:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:40.814 17:58:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:40.814 17:58:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:40.814 17:58:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:41.075 17:58:32 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:42:41.075 17:58:32 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:42:41.075 17:58:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:41.075 17:58:32 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:42:41.075 17:58:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:41.335 17:58:33 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:42:41.336 17:58:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:41.336 17:58:33 keyring_file -- keyring/file.sh@78 -- # jq length 00:42:41.596 17:58:33 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:42:41.596 17:58:33 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.3xKNQMusc4 00:42:41.596 17:58:33 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.3xKNQMusc4 00:42:41.596 17:58:33 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:41.596 17:58:33 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.3xKNQMusc4 00:42:41.596 17:58:33 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:41.596 17:58:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:41.596 17:58:33 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:41.596 17:58:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:41.596 17:58:33 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3xKNQMusc4 00:42:41.596 17:58:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3xKNQMusc4 00:42:41.596 [2024-10-08 17:58:33.497690] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.3xKNQMusc4': 0100660 00:42:41.596 [2024-10-08 17:58:33.497710] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:41.596 request: 00:42:41.596 { 00:42:41.596 "name": "key0", 00:42:41.596 "path": "/tmp/tmp.3xKNQMusc4", 00:42:41.596 "method": "keyring_file_add_key", 00:42:41.596 "req_id": 1 00:42:41.596 } 00:42:41.596 Got JSON-RPC error response 00:42:41.596 response: 00:42:41.596 { 00:42:41.596 "code": -1, 00:42:41.596 "message": "Operation not permitted" 00:42:41.596 } 00:42:41.596 17:58:33 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:41.596 17:58:33 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:41.596 17:58:33 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:41.596 17:58:33 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:41.596 17:58:33 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.3xKNQMusc4 00:42:41.596 17:58:33 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3xKNQMusc4 00:42:41.596 17:58:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3xKNQMusc4 00:42:41.856 17:58:33 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.3xKNQMusc4 00:42:41.856 17:58:33 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:42:41.856 17:58:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:41.856 17:58:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:41.856 17:58:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:41.856 17:58:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:41.856 17:58:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:42.116 17:58:33 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:42:42.116 17:58:33 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:42.116 17:58:33 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:42.116 17:58:33 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:42.117 17:58:33 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:42.117 17:58:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:42.117 17:58:33 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:42.117 17:58:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:42.117 17:58:33 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:42.117 17:58:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:42.117 [2024-10-08 17:58:34.059119] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.3xKNQMusc4': No such file or directory 00:42:42.117 [2024-10-08 17:58:34.059132] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:42.117 [2024-10-08 17:58:34.059146] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:42.117 [2024-10-08 17:58:34.059151] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:42:42.117 [2024-10-08 17:58:34.059157] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:42.117 [2024-10-08 17:58:34.059162] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:42.117 request: 00:42:42.117 { 00:42:42.117 "name": "nvme0", 00:42:42.117 "trtype": "tcp", 00:42:42.117 "traddr": "127.0.0.1", 00:42:42.117 "adrfam": "ipv4", 00:42:42.117 "trsvcid": "4420", 00:42:42.117 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:42.117 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:42.117 "prchk_reftag": false, 00:42:42.117 "prchk_guard": false, 00:42:42.117 "hdgst": false, 00:42:42.117 "ddgst": false, 00:42:42.117 "psk": "key0", 00:42:42.117 "allow_unrecognized_csi": false, 00:42:42.117 "method": "bdev_nvme_attach_controller", 00:42:42.117 "req_id": 1 00:42:42.117 } 00:42:42.117 Got JSON-RPC error response 00:42:42.117 response: 00:42:42.117 { 00:42:42.117 "code": -19, 00:42:42.117 "message": "No such device" 00:42:42.117 } 00:42:42.117 17:58:34 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:42.117 17:58:34 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:42.117 17:58:34 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:42.117 17:58:34 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:42.117 17:58:34 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:42:42.117 17:58:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:42.377 17:58:34 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:42.377 17:58:34 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:42:42.377 17:58:34 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:42.377 17:58:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:42.377 17:58:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:42.377 17:58:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:42.377 17:58:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.hlg1ug1pi0 00:42:42.377 17:58:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:42.377 17:58:34 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:42.377 17:58:34 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:42:42.377 17:58:34 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:42.377 17:58:34 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:42:42.377 17:58:34 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:42:42.377 17:58:34 keyring_file -- nvmf/common.sh@731 -- # python - 00:42:42.377 17:58:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.hlg1ug1pi0 00:42:42.377 17:58:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.hlg1ug1pi0 00:42:42.377 17:58:34 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.hlg1ug1pi0 00:42:42.377 17:58:34 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hlg1ug1pi0 00:42:42.377 17:58:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hlg1ug1pi0 00:42:42.638 17:58:34 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:42.638 17:58:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:42.898 nvme0n1 00:42:42.898 17:58:34 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:42:42.898 17:58:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:42.898 17:58:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:42.898 17:58:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:42.898 17:58:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:42.898 17:58:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:43.158 17:58:34 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:42:43.158 17:58:34 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:42:43.158 17:58:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:43.158 17:58:35 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:42:43.158 17:58:35 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:42:43.158 17:58:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:43.158 17:58:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:42:43.158 17:58:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:43.418 17:58:35 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:42:43.418 17:58:35 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:42:43.418 17:58:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:43.418 17:58:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:43.418 17:58:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:43.418 17:58:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:43.418 17:58:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:43.678 17:58:35 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:42:43.678 17:58:35 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:43.678 17:58:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:43.678 17:58:35 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:42:43.678 17:58:35 keyring_file -- keyring/file.sh@105 -- # jq length 00:42:43.678 17:58:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:43.939 17:58:35 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:42:43.939 17:58:35 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hlg1ug1pi0 00:42:43.939 17:58:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hlg1ug1pi0 00:42:44.199 17:58:36 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.sLsLpP3zS4 00:42:44.199 17:58:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.sLsLpP3zS4 00:42:44.460 17:58:36 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:44.460 17:58:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:44.460 nvme0n1 00:42:44.460 17:58:36 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:42:44.460 17:58:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:44.720 17:58:36 keyring_file -- keyring/file.sh@113 -- # config='{ 00:42:44.720 "subsystems": [ 00:42:44.720 { 00:42:44.720 "subsystem": "keyring", 00:42:44.720 "config": [ 00:42:44.720 { 00:42:44.720 "method": "keyring_file_add_key", 00:42:44.720 "params": { 00:42:44.720 "name": "key0", 00:42:44.721 "path": "/tmp/tmp.hlg1ug1pi0" 00:42:44.721 } 00:42:44.721 }, 00:42:44.721 { 00:42:44.721 "method": "keyring_file_add_key", 00:42:44.721 "params": { 00:42:44.721 "name": "key1", 00:42:44.721 "path": "/tmp/tmp.sLsLpP3zS4" 00:42:44.721 } 00:42:44.721 } 00:42:44.721 ] 
00:42:44.721 }, 00:42:44.721 { 00:42:44.721 "subsystem": "iobuf", 00:42:44.721 "config": [ 00:42:44.721 { 00:42:44.721 "method": "iobuf_set_options", 00:42:44.721 "params": { 00:42:44.721 "small_pool_count": 8192, 00:42:44.721 "large_pool_count": 1024, 00:42:44.721 "small_bufsize": 8192, 00:42:44.721 "large_bufsize": 135168 00:42:44.721 } 00:42:44.721 } 00:42:44.721 ] 00:42:44.721 }, 00:42:44.721 { 00:42:44.721 "subsystem": "sock", 00:42:44.721 "config": [ 00:42:44.721 { 00:42:44.721 "method": "sock_set_default_impl", 00:42:44.721 "params": { 00:42:44.721 "impl_name": "posix" 00:42:44.721 } 00:42:44.721 }, 00:42:44.721 { 00:42:44.721 "method": "sock_impl_set_options", 00:42:44.721 "params": { 00:42:44.721 "impl_name": "ssl", 00:42:44.721 "recv_buf_size": 4096, 00:42:44.721 "send_buf_size": 4096, 00:42:44.721 "enable_recv_pipe": true, 00:42:44.721 "enable_quickack": false, 00:42:44.721 "enable_placement_id": 0, 00:42:44.721 "enable_zerocopy_send_server": true, 00:42:44.721 "enable_zerocopy_send_client": false, 00:42:44.721 "zerocopy_threshold": 0, 00:42:44.721 "tls_version": 0, 00:42:44.721 "enable_ktls": false 00:42:44.721 } 00:42:44.721 }, 00:42:44.721 { 00:42:44.721 "method": "sock_impl_set_options", 00:42:44.721 "params": { 00:42:44.721 "impl_name": "posix", 00:42:44.721 "recv_buf_size": 2097152, 00:42:44.721 "send_buf_size": 2097152, 00:42:44.721 "enable_recv_pipe": true, 00:42:44.721 "enable_quickack": false, 00:42:44.721 "enable_placement_id": 0, 00:42:44.721 "enable_zerocopy_send_server": true, 00:42:44.721 "enable_zerocopy_send_client": false, 00:42:44.721 "zerocopy_threshold": 0, 00:42:44.721 "tls_version": 0, 00:42:44.721 "enable_ktls": false 00:42:44.721 } 00:42:44.721 } 00:42:44.721 ] 00:42:44.721 }, 00:42:44.721 { 00:42:44.721 "subsystem": "vmd", 00:42:44.721 "config": [] 00:42:44.721 }, 00:42:44.721 { 00:42:44.721 "subsystem": "accel", 00:42:44.721 "config": [ 00:42:44.721 { 00:42:44.721 "method": "accel_set_options", 00:42:44.721 "params": { 00:42:44.721 "small_cache_size": 128, 00:42:44.721 "large_cache_size": 16, 00:42:44.721 "task_count": 2048, 00:42:44.721 "sequence_count": 2048, 00:42:44.721 "buf_count": 2048 00:42:44.721 } 00:42:44.721 } 00:42:44.721 ] 00:42:44.721 }, 00:42:44.721 { 00:42:44.721 "subsystem": "bdev", 00:42:44.721 "config": [ 00:42:44.721 { 00:42:44.721 "method": "bdev_set_options", 00:42:44.721 "params": { 00:42:44.721 "bdev_io_pool_size": 65535, 00:42:44.721 "bdev_io_cache_size": 256, 00:42:44.721 "bdev_auto_examine": true, 00:42:44.721 "iobuf_small_cache_size": 128, 00:42:44.721 "iobuf_large_cache_size": 16 00:42:44.721 } 00:42:44.721 }, 00:42:44.721 { 00:42:44.721 "method": "bdev_raid_set_options", 00:42:44.721 "params": { 00:42:44.721 "process_window_size_kb": 1024, 00:42:44.721 "process_max_bandwidth_mb_sec": 0 00:42:44.721 } 00:42:44.721 }, 00:42:44.721 { 00:42:44.721 "method": "bdev_iscsi_set_options", 00:42:44.721 "params": { 00:42:44.721 "timeout_sec": 30 00:42:44.721 } 00:42:44.721 }, 00:42:44.721 { 00:42:44.721 "method": "bdev_nvme_set_options", 00:42:44.721 "params": { 00:42:44.721 "action_on_timeout": "none", 00:42:44.721 "timeout_us": 0, 00:42:44.721 "timeout_admin_us": 0, 00:42:44.721 "keep_alive_timeout_ms": 10000, 00:42:44.721 "arbitration_burst": 0, 00:42:44.721 "low_priority_weight": 0, 00:42:44.721 "medium_priority_weight": 0, 00:42:44.721 "high_priority_weight": 0, 00:42:44.721 "nvme_adminq_poll_period_us": 10000, 00:42:44.721 "nvme_ioq_poll_period_us": 0, 00:42:44.721 "io_queue_requests": 512, 00:42:44.721 "delay_cmd_submit": true, 
00:42:44.721 "transport_retry_count": 4, 00:42:44.721 "bdev_retry_count": 3, 00:42:44.721 "transport_ack_timeout": 0, 00:42:44.721 "ctrlr_loss_timeout_sec": 0, 00:42:44.721 "reconnect_delay_sec": 0, 00:42:44.721 "fast_io_fail_timeout_sec": 0, 00:42:44.721 "disable_auto_failback": false, 00:42:44.721 "generate_uuids": false, 00:42:44.721 "transport_tos": 0, 00:42:44.721 "nvme_error_stat": false, 00:42:44.721 "rdma_srq_size": 0, 00:42:44.721 "io_path_stat": false, 00:42:44.721 "allow_accel_sequence": false, 00:42:44.721 "rdma_max_cq_size": 0, 00:42:44.721 "rdma_cm_event_timeout_ms": 0, 00:42:44.721 "dhchap_digests": [ 00:42:44.721 "sha256", 00:42:44.721 "sha384", 00:42:44.721 "sha512" 00:42:44.721 ], 00:42:44.721 "dhchap_dhgroups": [ 00:42:44.721 "null", 00:42:44.721 "ffdhe2048", 00:42:44.721 "ffdhe3072", 00:42:44.721 "ffdhe4096", 00:42:44.721 "ffdhe6144", 00:42:44.721 "ffdhe8192" 00:42:44.721 ] 00:42:44.721 } 00:42:44.721 }, 00:42:44.721 { 00:42:44.721 "method": "bdev_nvme_attach_controller", 00:42:44.721 "params": { 00:42:44.721 "name": "nvme0", 00:42:44.721 "trtype": "TCP", 00:42:44.721 "adrfam": "IPv4", 00:42:44.721 "traddr": "127.0.0.1", 00:42:44.721 "trsvcid": "4420", 00:42:44.721 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:44.721 "prchk_reftag": false, 00:42:44.721 "prchk_guard": false, 00:42:44.721 "ctrlr_loss_timeout_sec": 0, 00:42:44.721 "reconnect_delay_sec": 0, 00:42:44.721 "fast_io_fail_timeout_sec": 0, 00:42:44.721 "psk": "key0", 00:42:44.721 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:44.721 "hdgst": false, 00:42:44.721 "ddgst": false, 00:42:44.721 "multipath": "multipath" 00:42:44.721 } 00:42:44.721 }, 00:42:44.721 { 00:42:44.721 "method": "bdev_nvme_set_hotplug", 00:42:44.721 "params": { 00:42:44.721 "period_us": 100000, 00:42:44.721 "enable": false 00:42:44.721 } 00:42:44.722 }, 00:42:44.722 { 00:42:44.722 "method": "bdev_wait_for_examine" 00:42:44.722 } 00:42:44.722 ] 00:42:44.722 }, 00:42:44.722 { 00:42:44.722 "subsystem": "nbd", 00:42:44.722 "config": [] 00:42:44.722 } 00:42:44.722 ] 00:42:44.722 }' 00:42:44.722 17:58:36 keyring_file -- keyring/file.sh@115 -- # killprocess 708217 00:42:44.722 17:58:36 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 708217 ']' 00:42:44.722 17:58:36 keyring_file -- common/autotest_common.sh@954 -- # kill -0 708217 00:42:44.722 17:58:36 keyring_file -- common/autotest_common.sh@955 -- # uname 00:42:44.722 17:58:36 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:44.722 17:58:36 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 708217 00:42:44.983 17:58:36 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:44.983 17:58:36 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:44.983 17:58:36 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 708217' 00:42:44.983 killing process with pid 708217 00:42:44.983 17:58:36 keyring_file -- common/autotest_common.sh@969 -- # kill 708217 00:42:44.983 Received shutdown signal, test time was about 1.000000 seconds 00:42:44.983 00:42:44.983 Latency(us) 00:42:44.983 [2024-10-08T15:58:36.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:44.983 [2024-10-08T15:58:36.975Z] =================================================================================================================== 00:42:44.983 [2024-10-08T15:58:36.975Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:44.983 17:58:36 keyring_file -- 
common/autotest_common.sh@974 -- # wait 708217 00:42:44.983 17:58:36 keyring_file -- keyring/file.sh@118 -- # bperfpid=710034 00:42:44.983 17:58:36 keyring_file -- keyring/file.sh@120 -- # waitforlisten 710034 /var/tmp/bperf.sock 00:42:44.983 17:58:36 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 710034 ']' 00:42:44.983 17:58:36 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:44.983 17:58:36 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:44.983 17:58:36 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:42:44.983 17:58:36 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:44.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:44.983 17:58:36 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:44.983 17:58:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:44.983 17:58:36 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:42:44.983 "subsystems": [ 00:42:44.983 { 00:42:44.983 "subsystem": "keyring", 00:42:44.983 "config": [ 00:42:44.983 { 00:42:44.983 "method": "keyring_file_add_key", 00:42:44.983 "params": { 00:42:44.983 "name": "key0", 00:42:44.983 "path": "/tmp/tmp.hlg1ug1pi0" 00:42:44.983 } 00:42:44.983 }, 00:42:44.983 { 00:42:44.983 "method": "keyring_file_add_key", 00:42:44.983 "params": { 00:42:44.983 "name": "key1", 00:42:44.983 "path": "/tmp/tmp.sLsLpP3zS4" 00:42:44.983 } 00:42:44.983 } 00:42:44.983 ] 00:42:44.983 }, 00:42:44.983 { 00:42:44.983 "subsystem": "iobuf", 00:42:44.983 "config": [ 00:42:44.983 { 00:42:44.983 "method": "iobuf_set_options", 00:42:44.983 "params": { 00:42:44.983 "small_pool_count": 8192, 00:42:44.983 "large_pool_count": 1024, 00:42:44.983 "small_bufsize": 8192, 00:42:44.983 "large_bufsize": 135168 00:42:44.983 } 00:42:44.983 } 00:42:44.983 ] 00:42:44.983 }, 00:42:44.983 { 00:42:44.983 "subsystem": "sock", 00:42:44.983 "config": [ 00:42:44.983 { 00:42:44.983 "method": "sock_set_default_impl", 00:42:44.983 "params": { 00:42:44.983 "impl_name": "posix" 00:42:44.983 } 00:42:44.983 }, 00:42:44.983 { 00:42:44.983 "method": "sock_impl_set_options", 00:42:44.983 "params": { 00:42:44.983 "impl_name": "ssl", 00:42:44.983 "recv_buf_size": 4096, 00:42:44.983 "send_buf_size": 4096, 00:42:44.983 "enable_recv_pipe": true, 00:42:44.983 "enable_quickack": false, 00:42:44.983 "enable_placement_id": 0, 00:42:44.983 "enable_zerocopy_send_server": true, 00:42:44.983 "enable_zerocopy_send_client": false, 00:42:44.983 "zerocopy_threshold": 0, 00:42:44.983 "tls_version": 0, 00:42:44.983 "enable_ktls": false 00:42:44.983 } 00:42:44.983 }, 00:42:44.983 { 00:42:44.983 "method": "sock_impl_set_options", 00:42:44.983 "params": { 00:42:44.983 "impl_name": "posix", 00:42:44.983 "recv_buf_size": 2097152, 00:42:44.983 "send_buf_size": 2097152, 00:42:44.983 "enable_recv_pipe": true, 00:42:44.983 "enable_quickack": false, 00:42:44.983 "enable_placement_id": 0, 00:42:44.983 "enable_zerocopy_send_server": true, 00:42:44.983 "enable_zerocopy_send_client": false, 00:42:44.983 "zerocopy_threshold": 0, 00:42:44.983 "tls_version": 0, 00:42:44.983 "enable_ktls": false 00:42:44.983 } 00:42:44.983 } 00:42:44.983 ] 00:42:44.983 }, 00:42:44.983 { 00:42:44.983 "subsystem": "vmd", 00:42:44.983 
"config": [] 00:42:44.983 }, 00:42:44.983 { 00:42:44.983 "subsystem": "accel", 00:42:44.983 "config": [ 00:42:44.983 { 00:42:44.983 "method": "accel_set_options", 00:42:44.983 "params": { 00:42:44.983 "small_cache_size": 128, 00:42:44.983 "large_cache_size": 16, 00:42:44.983 "task_count": 2048, 00:42:44.983 "sequence_count": 2048, 00:42:44.983 "buf_count": 2048 00:42:44.983 } 00:42:44.983 } 00:42:44.983 ] 00:42:44.983 }, 00:42:44.983 { 00:42:44.983 "subsystem": "bdev", 00:42:44.983 "config": [ 00:42:44.983 { 00:42:44.983 "method": "bdev_set_options", 00:42:44.983 "params": { 00:42:44.983 "bdev_io_pool_size": 65535, 00:42:44.983 "bdev_io_cache_size": 256, 00:42:44.983 "bdev_auto_examine": true, 00:42:44.983 "iobuf_small_cache_size": 128, 00:42:44.983 "iobuf_large_cache_size": 16 00:42:44.983 } 00:42:44.983 }, 00:42:44.983 { 00:42:44.983 "method": "bdev_raid_set_options", 00:42:44.983 "params": { 00:42:44.983 "process_window_size_kb": 1024, 00:42:44.983 "process_max_bandwidth_mb_sec": 0 00:42:44.983 } 00:42:44.983 }, 00:42:44.983 { 00:42:44.983 "method": "bdev_iscsi_set_options", 00:42:44.984 "params": { 00:42:44.984 "timeout_sec": 30 00:42:44.984 } 00:42:44.984 }, 00:42:44.984 { 00:42:44.984 "method": "bdev_nvme_set_options", 00:42:44.984 "params": { 00:42:44.984 "action_on_timeout": "none", 00:42:44.984 "timeout_us": 0, 00:42:44.984 "timeout_admin_us": 0, 00:42:44.984 "keep_alive_timeout_ms": 10000, 00:42:44.984 "arbitration_burst": 0, 00:42:44.984 "low_priority_weight": 0, 00:42:44.984 "medium_priority_weight": 0, 00:42:44.984 "high_priority_weight": 0, 00:42:44.984 "nvme_adminq_poll_period_us": 10000, 00:42:44.984 "nvme_ioq_poll_period_us": 0, 00:42:44.984 "io_queue_requests": 512, 00:42:44.984 "delay_cmd_submit": true, 00:42:44.984 "transport_retry_count": 4, 00:42:44.984 "bdev_retry_count": 3, 00:42:44.984 "transport_ack_timeout": 0, 00:42:44.984 "ctrlr_loss_timeout_sec": 0, 00:42:44.984 "reconnect_delay_sec": 0, 00:42:44.984 "fast_io_fail_timeout_sec": 0, 00:42:44.984 "disable_auto_failback": false, 00:42:44.984 "generate_uuids": false, 00:42:44.984 "transport_tos": 0, 00:42:44.984 "nvme_error_stat": false, 00:42:44.984 "rdma_srq_size": 0, 00:42:44.984 "io_path_stat": false, 00:42:44.984 "allow_accel_sequence": false, 00:42:44.984 "rdma_max_cq_size": 0, 00:42:44.984 "rdma_cm_event_timeout_ms": 0, 00:42:44.984 "dhchap_digests": [ 00:42:44.984 "sha256", 00:42:44.984 "sha384", 00:42:44.984 "sha512" 00:42:44.984 ], 00:42:44.984 "dhchap_dhgroups": [ 00:42:44.984 "null", 00:42:44.984 "ffdhe2048", 00:42:44.984 "ffdhe3072", 00:42:44.984 "ffdhe4096", 00:42:44.984 "ffdhe6144", 00:42:44.984 "ffdhe8192" 00:42:44.984 ] 00:42:44.984 } 00:42:44.984 }, 00:42:44.984 { 00:42:44.984 "method": "bdev_nvme_attach_controller", 00:42:44.984 "params": { 00:42:44.984 "name": "nvme0", 00:42:44.984 "trtype": "TCP", 00:42:44.984 "adrfam": "IPv4", 00:42:44.984 "traddr": "127.0.0.1", 00:42:44.984 "trsvcid": "4420", 00:42:44.984 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:44.984 "prchk_reftag": false, 00:42:44.984 "prchk_guard": false, 00:42:44.984 "ctrlr_loss_timeout_sec": 0, 00:42:44.984 "reconnect_delay_sec": 0, 00:42:44.984 "fast_io_fail_timeout_sec": 0, 00:42:44.984 "psk": "key0", 00:42:44.984 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:44.984 "hdgst": false, 00:42:44.984 "ddgst": false, 00:42:44.984 "multipath": "multipath" 00:42:44.984 } 00:42:44.984 }, 00:42:44.984 { 00:42:44.984 "method": "bdev_nvme_set_hotplug", 00:42:44.984 "params": { 00:42:44.984 "period_us": 100000, 00:42:44.984 "enable": false 
00:42:44.984 } 00:42:44.984 }, 00:42:44.984 { 00:42:44.984 "method": "bdev_wait_for_examine" 00:42:44.984 } 00:42:44.984 ] 00:42:44.984 }, 00:42:44.984 { 00:42:44.984 "subsystem": "nbd", 00:42:44.984 "config": [] 00:42:44.984 } 00:42:44.984 ] 00:42:44.984 }' 00:42:44.984 [2024-10-08 17:58:36.910048] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 00:42:44.984 [2024-10-08 17:58:36.910106] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid710034 ] 00:42:45.244 [2024-10-08 17:58:36.984731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:45.244 [2024-10-08 17:58:37.037250] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:42:45.244 [2024-10-08 17:58:37.180107] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:45.814 17:58:37 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:45.814 17:58:37 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:45.814 17:58:37 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:42:45.814 17:58:37 keyring_file -- keyring/file.sh@121 -- # jq length 00:42:45.814 17:58:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:46.075 17:58:37 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:42:46.075 17:58:37 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:42:46.075 17:58:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:46.075 17:58:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:46.075 17:58:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:46.075 17:58:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:46.075 17:58:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:46.075 17:58:38 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:42:46.075 17:58:38 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:42:46.075 17:58:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:46.075 17:58:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:46.075 17:58:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:46.075 17:58:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:46.075 17:58:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:46.335 17:58:38 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:42:46.335 17:58:38 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:42:46.335 17:58:38 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:42:46.335 17:58:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:42:46.594 17:58:38 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:42:46.594 17:58:38 keyring_file -- keyring/file.sh@1 -- # cleanup 00:42:46.594 17:58:38 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.hlg1ug1pi0 
/tmp/tmp.sLsLpP3zS4 00:42:46.594 17:58:38 keyring_file -- keyring/file.sh@20 -- # killprocess 710034 00:42:46.594 17:58:38 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 710034 ']' 00:42:46.594 17:58:38 keyring_file -- common/autotest_common.sh@954 -- # kill -0 710034 00:42:46.594 17:58:38 keyring_file -- common/autotest_common.sh@955 -- # uname 00:42:46.594 17:58:38 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:46.594 17:58:38 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 710034 00:42:46.594 17:58:38 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:46.594 17:58:38 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:46.594 17:58:38 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 710034' 00:42:46.594 killing process with pid 710034 00:42:46.594 17:58:38 keyring_file -- common/autotest_common.sh@969 -- # kill 710034 00:42:46.594 Received shutdown signal, test time was about 1.000000 seconds 00:42:46.594 00:42:46.594 Latency(us) 00:42:46.594 [2024-10-08T15:58:38.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:46.595 [2024-10-08T15:58:38.587Z] =================================================================================================================== 00:42:46.595 [2024-10-08T15:58:38.587Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:46.595 17:58:38 keyring_file -- common/autotest_common.sh@974 -- # wait 710034 00:42:46.595 17:58:38 keyring_file -- keyring/file.sh@21 -- # killprocess 708211 00:42:46.595 17:58:38 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 708211 ']' 00:42:46.595 17:58:38 keyring_file -- common/autotest_common.sh@954 -- # kill -0 708211 00:42:46.854 17:58:38 keyring_file -- common/autotest_common.sh@955 -- # uname 00:42:46.854 17:58:38 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:46.854 17:58:38 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 708211 00:42:46.854 17:58:38 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:46.854 17:58:38 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:46.854 17:58:38 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 708211' 00:42:46.854 killing process with pid 708211 00:42:46.854 17:58:38 keyring_file -- common/autotest_common.sh@969 -- # kill 708211 00:42:46.854 17:58:38 keyring_file -- common/autotest_common.sh@974 -- # wait 708211 00:42:47.115 00:42:47.115 real 0m11.554s 00:42:47.115 user 0m28.590s 00:42:47.115 sys 0m2.601s 00:42:47.115 17:58:38 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:47.115 17:58:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:47.115 ************************************ 00:42:47.115 END TEST keyring_file 00:42:47.115 ************************************ 00:42:47.115 17:58:38 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:42:47.115 17:58:38 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:47.115 17:58:38 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:42:47.115 17:58:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:47.115 17:58:38 -- common/autotest_common.sh@10 -- # set +x 00:42:47.115 
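Note: the keyring_linux suite started below is launched through scripts/keyctl-session-wrapper, and the "Joined session keyring: 1045336623" banner that follows is the result of joining a fresh session keyring so the test's keys die with it. The Python below is an illustrative stand-in for that wrapper under the assumption that it is a thin shim around the keyctl join operation; the ctypes binding is the sketch's own, not SPDK's actual script.

#!/usr/bin/env python3
# Illustrative equivalent of keyctl-session-wrapper: join an anonymous
# session keyring, print the same banner the log shows, then exec the
# wrapped command (e.g. test/keyring/linux.sh) inside it.
import ctypes
import os
import sys

libkeyutils = ctypes.CDLL("libkeyutils.so.1", use_errno=True)
libkeyutils.keyctl_join_session_keyring.argtypes = [ctypes.c_char_p]
libkeyutils.keyctl_join_session_keyring.restype = ctypes.c_int32

# NULL name -> anonymous keyring; keys added by the test are scoped to it
serial = libkeyutils.keyctl_join_session_keyring(None)
if serial < 0:
    err = ctypes.get_errno()
    raise OSError(err, os.strerror(err))
print(f"Joined session keyring: {serial}")

os.execvp(sys.argv[1], sys.argv[1:])  # run the test in the new keyring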
************************************ 00:42:47.115 START TEST keyring_linux 00:42:47.115 ************************************ 00:42:47.115 17:58:38 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:47.115 Joined session keyring: 1045336623 00:42:47.115 * Looking for test storage... 00:42:47.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:47.115 17:58:39 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:47.115 17:58:39 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:42:47.115 17:58:39 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:47.376 17:58:39 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@345 -- # : 1 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@368 -- # return 0 00:42:47.376 17:58:39 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:47.376 17:58:39 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:47.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:47.376 --rc genhtml_branch_coverage=1 00:42:47.376 --rc genhtml_function_coverage=1 00:42:47.376 --rc genhtml_legend=1 00:42:47.376 --rc geninfo_all_blocks=1 00:42:47.376 --rc geninfo_unexecuted_blocks=1 00:42:47.376 00:42:47.376 ' 00:42:47.376 17:58:39 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:47.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:47.376 --rc genhtml_branch_coverage=1 00:42:47.376 --rc genhtml_function_coverage=1 00:42:47.376 --rc genhtml_legend=1 00:42:47.376 --rc geninfo_all_blocks=1 00:42:47.376 --rc geninfo_unexecuted_blocks=1 00:42:47.376 00:42:47.376 ' 00:42:47.376 17:58:39 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:47.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:47.376 --rc genhtml_branch_coverage=1 00:42:47.376 --rc genhtml_function_coverage=1 00:42:47.376 --rc genhtml_legend=1 00:42:47.376 --rc geninfo_all_blocks=1 00:42:47.376 --rc geninfo_unexecuted_blocks=1 00:42:47.376 00:42:47.376 ' 00:42:47.376 17:58:39 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:47.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:47.376 --rc genhtml_branch_coverage=1 00:42:47.376 --rc genhtml_function_coverage=1 00:42:47.376 --rc genhtml_legend=1 00:42:47.376 --rc geninfo_all_blocks=1 00:42:47.376 --rc geninfo_unexecuted_blocks=1 00:42:47.376 00:42:47.376 ' 00:42:47.376 17:58:39 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:47.376 17:58:39 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:47.376 17:58:39 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:42:47.376 17:58:39 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:47.376 17:58:39 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:47.376 17:58:39 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:47.376 17:58:39 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:47.376 17:58:39 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:47.376 17:58:39 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:47.376 17:58:39 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:47.376 17:58:39 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:47.376 17:58:39 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:47.376 17:58:39 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:47.376 17:58:39 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:47.376 17:58:39 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:47.376 17:58:39 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:47.376 17:58:39 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:47.376 17:58:39 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:47.376 17:58:39 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:47.376 17:58:39 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:47.376 17:58:39 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:47.376 17:58:39 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:47.376 17:58:39 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:47.376 17:58:39 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:47.376 17:58:39 keyring_linux -- paths/export.sh@5 -- # export PATH 00:42:47.376 17:58:39 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
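The lt 1.15 2 / cmp_versions trace near the top of this test is scripts/common.sh's dotted-version compare deciding whether the installed lcov predates 2.x, which selects the legacy --rc option spelling used throughout the coverage steps later in this log. A condensed sketch of that helper under the assumption it mirrors the traced logic; decimal() is reduced to a plain numeric check and only the operators seen in the trace are handled:

decimal() { [[ $1 =~ ^[0-9]+$ ]] && echo "$1" || echo 0; }  # upstream normalises more cases
lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"            # split on '.', '-' and ':'
    local op=$2
    IFS=.-: read -ra ver2 <<< "$3"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v a b
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        a=$(decimal "${ver1[v]:-0}") b=$(decimal "${ver2[v]:-0}")
        ((a > b)) && { [[ $op == '>' ]]; return; }
        ((a < b)) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]                         # all components equal
}
lt 1.15 2 && echo 'installed lcov predates 2.x'   # 1 < 2 on the first component
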
00:42:47.376 17:58:39 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:42:47.377 17:58:39 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:47.377 17:58:39 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:47.377 17:58:39 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:47.377 17:58:39 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:47.377 17:58:39 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:47.377 17:58:39 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:47.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:47.377 17:58:39 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:47.377 17:58:39 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:47.377 17:58:39 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:47.377 17:58:39 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:47.377 17:58:39 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:47.377 17:58:39 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:47.377 17:58:39 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:42:47.377 17:58:39 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:42:47.377 17:58:39 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:42:47.377 17:58:39 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:42:47.377 17:58:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:47.377 17:58:39 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:42:47.377 17:58:39 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:47.377 17:58:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:47.377 17:58:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:42:47.377 17:58:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:47.377 17:58:39 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:47.377 17:58:39 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:42:47.377 17:58:39 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:47.377 17:58:39 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:42:47.377 17:58:39 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:42:47.377 17:58:39 keyring_linux -- nvmf/common.sh@731 -- # python - 00:42:47.377 17:58:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:42:47.377 17:58:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:42:47.377 /tmp/:spdk-test:key0 00:42:47.377 17:58:39 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:42:47.377 17:58:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:47.377 17:58:39 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:42:47.377 17:58:39 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:47.377 17:58:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:47.377 17:58:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:42:47.377 
17:58:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:47.377 17:58:39 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:47.377 17:58:39 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:42:47.377 17:58:39 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:47.377 17:58:39 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:42:47.377 17:58:39 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:42:47.377 17:58:39 keyring_linux -- nvmf/common.sh@731 -- # python - 00:42:47.377 17:58:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:42:47.377 17:58:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:42:47.377 /tmp/:spdk-test:key1 00:42:47.377 17:58:39 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=710469 00:42:47.377 17:58:39 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 710469 00:42:47.377 17:58:39 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:47.377 17:58:39 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 710469 ']' 00:42:47.377 17:58:39 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:47.377 17:58:39 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:47.377 17:58:39 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:47.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:47.377 17:58:39 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:47.377 17:58:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:47.377 [2024-10-08 17:58:39.330096] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
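prep_key above wraps each raw hex string into the NVMe TLS PSK interchange form, NVMeTLSkey-1:<hh>:<base64 payload>:, and writes it to /tmp/:spdk-test:keyN with mode 0600. The base64 payloads visible in this log (MDAxMTIy... for key0) decode to the 32-character ASCII key with 4 extra bytes appended, consistent with a CRC32 over the key string. A sketch of the inline python step that format_interchange_psk/format_key drives could therefore look like the following; the little-endian packing of the CRC is an assumption that matches the strings above, not something the trace states:

format_interchange_psk() {   # sketch; usage: format_interchange_psk <hex-key> <digest>
    python3 - "$1" "$2" <<'PYEOF'
import base64, struct, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
# digest 0 = no hash; the ASCII key string itself is wrapped, with an
# appended CRC32 (byte order assumed little-endian) before base64 encoding
payload = key + struct.pack("<I", zlib.crc32(key))
print(f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(payload).decode()}:")
PYEOF
}
format_interchange_psk 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
chmod 0600 /tmp/:spdk-test:key0   # key files must not be world-readable
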
00:42:47.377 [2024-10-08 17:58:39.330166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid710469 ] 00:42:47.636 [2024-10-08 17:58:39.409076] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:47.636 [2024-10-08 17:58:39.466001] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:42:48.207 17:58:40 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:48.207 17:58:40 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:42:48.207 17:58:40 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:42:48.207 17:58:40 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:48.207 17:58:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:48.207 [2024-10-08 17:58:40.119909] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:48.207 null0 00:42:48.207 [2024-10-08 17:58:40.151962] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:48.207 [2024-10-08 17:58:40.152309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:48.207 17:58:40 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:48.207 17:58:40 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:42:48.207 724322638 00:42:48.207 17:58:40 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:42:48.207 650611969 00:42:48.207 17:58:40 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=710688 00:42:48.207 17:58:40 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 710688 /var/tmp/bperf.sock 00:42:48.207 17:58:40 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:42:48.207 17:58:40 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 710688 ']' 00:42:48.207 17:58:40 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:48.207 17:58:40 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:48.207 17:58:40 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:48.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:48.207 17:58:40 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:48.207 17:58:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:48.467 [2024-10-08 17:58:40.228187] Starting SPDK v25.01-pre git sha1 52e9db722 / DPDK 24.03.0 initialization... 
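With spdk_tgt listening on 127.0.0.1:4420, the two interchange keys are loaded into the kernel session keyring; keyctl add prints the serial numbers (724322638 and 650611969 above) that check_keys later matches against the "sn" field from keyring_get_keys, and that cleanup unlinks at the end of the test. A condensed replay of the keyctl round trip traced here, reading the payload from the keyfile written in the previous step; the bdevperf instance whose startup banner appears above reaches these keys purely by keyring name:

sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)
keyctl search @s user :spdk-test:key0     # resolves the name back to the same serial
keyctl print "$sn"                        # dumps the NVMeTLSkey-1:00:...: payload
# bdevperf then references the key by name over its RPC socket, as traced below:
#   rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
#       -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
#       -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
keyctl unlink "$sn"                       # cleanup; prints '1 links removed' on success
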
00:42:48.467 [2024-10-08 17:58:40.228237] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid710688 ] 00:42:48.467 [2024-10-08 17:58:40.304462] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:48.467 [2024-10-08 17:58:40.358115] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:42:49.038 17:58:41 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:49.038 17:58:41 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:42:49.038 17:58:41 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:42:49.038 17:58:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:42:49.298 17:58:41 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:42:49.298 17:58:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:49.558 17:58:41 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:49.558 17:58:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:49.818 [2024-10-08 17:58:41.574075] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:49.818 nvme0n1 00:42:49.818 17:58:41 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:42:49.818 17:58:41 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:42:49.818 17:58:41 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:49.818 17:58:41 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:49.818 17:58:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:49.818 17:58:41 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:50.078 17:58:41 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:42:50.078 17:58:41 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:50.078 17:58:41 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:42:50.078 17:58:41 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:42:50.079 17:58:41 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:50.079 17:58:41 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:42:50.079 17:58:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:50.079 17:58:42 keyring_linux -- keyring/linux.sh@25 -- # sn=724322638 00:42:50.079 17:58:42 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:42:50.079 17:58:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:50.079 17:58:42 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 724322638 == \7\2\4\3\2\2\6\3\8 ]] 00:42:50.079 17:58:42 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 724322638 00:42:50.079 17:58:42 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:42:50.079 17:58:42 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:50.339 Running I/O for 1 seconds... 00:42:51.279 24749.00 IOPS, 96.68 MiB/s 00:42:51.279 Latency(us) 00:42:51.279 [2024-10-08T15:58:43.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:51.279 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:51.279 nvme0n1 : 1.01 24749.10 96.68 0.00 0.00 5156.85 1802.24 6335.15 00:42:51.279 [2024-10-08T15:58:43.271Z] =================================================================================================================== 00:42:51.279 [2024-10-08T15:58:43.271Z] Total : 24749.10 96.68 0.00 0.00 5156.85 1802.24 6335.15 00:42:51.279 { 00:42:51.279 "results": [ 00:42:51.280 { 00:42:51.280 "job": "nvme0n1", 00:42:51.280 "core_mask": "0x2", 00:42:51.280 "workload": "randread", 00:42:51.280 "status": "finished", 00:42:51.280 "queue_depth": 128, 00:42:51.280 "io_size": 4096, 00:42:51.280 "runtime": 1.005168, 00:42:51.280 "iops": 24749.096668417616, 00:42:51.280 "mibps": 96.67615886100631, 00:42:51.280 "io_failed": 0, 00:42:51.280 "io_timeout": 0, 00:42:51.280 "avg_latency_us": 5156.849510793102, 00:42:51.280 "min_latency_us": 1802.24, 00:42:51.280 "max_latency_us": 6335.1466666666665 00:42:51.280 } 00:42:51.280 ], 00:42:51.280 "core_count": 1 00:42:51.280 } 00:42:51.280 17:58:43 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:51.280 17:58:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:51.540 17:58:43 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:51.540 17:58:43 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:51.540 17:58:43 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:51.540 17:58:43 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:51.541 17:58:43 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:51.541 17:58:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:51.541 17:58:43 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:51.541 17:58:43 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:51.541 17:58:43 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:51.541 17:58:43 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:51.541 17:58:43 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:42:51.541 17:58:43 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
00:42:51.541 17:58:43 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:51.541 17:58:43 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:51.541 17:58:43 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:51.541 17:58:43 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:51.541 17:58:43 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:51.541 17:58:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:51.801 [2024-10-08 17:58:43.661200] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:51.801 [2024-10-08 17:58:43.661487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d0830 (107): Transport endpoint is not connected 00:42:51.801 [2024-10-08 17:58:43.662483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d0830 (9): Bad file descriptor 00:42:51.801 [2024-10-08 17:58:43.663485] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:51.801 [2024-10-08 17:58:43.663493] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:51.801 [2024-10-08 17:58:43.663499] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:51.801 [2024-10-08 17:58:43.663505] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:42:51.801 request: 00:42:51.801 { 00:42:51.801 "name": "nvme0", 00:42:51.801 "trtype": "tcp", 00:42:51.801 "traddr": "127.0.0.1", 00:42:51.801 "adrfam": "ipv4", 00:42:51.801 "trsvcid": "4420", 00:42:51.801 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:51.801 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:51.801 "prchk_reftag": false, 00:42:51.801 "prchk_guard": false, 00:42:51.801 "hdgst": false, 00:42:51.801 "ddgst": false, 00:42:51.801 "psk": ":spdk-test:key1", 00:42:51.801 "allow_unrecognized_csi": false, 00:42:51.801 "method": "bdev_nvme_attach_controller", 00:42:51.801 "req_id": 1 00:42:51.801 } 00:42:51.801 Got JSON-RPC error response 00:42:51.801 response: 00:42:51.801 { 00:42:51.801 "code": -5, 00:42:51.801 "message": "Input/output error" 00:42:51.801 } 00:42:51.801 17:58:43 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:42:51.801 17:58:43 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:51.801 17:58:43 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:51.801 17:58:43 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:51.801 17:58:43 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:51.801 17:58:43 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:51.802 17:58:43 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:51.802 17:58:43 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:51.802 17:58:43 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:51.802 17:58:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:51.802 17:58:43 keyring_linux -- keyring/linux.sh@33 -- # sn=724322638 00:42:51.802 17:58:43 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 724322638 00:42:51.802 1 links removed 00:42:51.802 17:58:43 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:51.802 17:58:43 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:51.802 17:58:43 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:51.802 17:58:43 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:51.802 17:58:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:51.802 17:58:43 keyring_linux -- keyring/linux.sh@33 -- # sn=650611969 00:42:51.802 17:58:43 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 650611969 00:42:51.802 1 links removed 00:42:51.802 17:58:43 keyring_linux -- keyring/linux.sh@41 -- # killprocess 710688 00:42:51.802 17:58:43 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 710688 ']' 00:42:51.802 17:58:43 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 710688 00:42:51.802 17:58:43 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:42:51.802 17:58:43 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:51.802 17:58:43 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 710688 00:42:51.802 17:58:43 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:51.802 17:58:43 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:51.802 17:58:43 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 710688' 00:42:51.802 killing process with pid 710688 00:42:51.802 17:58:43 keyring_linux -- common/autotest_common.sh@969 -- # kill 710688 00:42:51.802 Received shutdown signal, test time was about 1.000000 seconds 00:42:51.802 00:42:51.802 
Latency(us) 00:42:51.802 [2024-10-08T15:58:43.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:51.802 [2024-10-08T15:58:43.794Z] =================================================================================================================== 00:42:51.802 [2024-10-08T15:58:43.794Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:51.802 17:58:43 keyring_linux -- common/autotest_common.sh@974 -- # wait 710688 00:42:52.062 17:58:43 keyring_linux -- keyring/linux.sh@42 -- # killprocess 710469 00:42:52.063 17:58:43 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 710469 ']' 00:42:52.063 17:58:43 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 710469 00:42:52.063 17:58:43 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:42:52.063 17:58:43 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:52.063 17:58:43 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 710469 00:42:52.063 17:58:43 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:52.063 17:58:43 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:52.063 17:58:43 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 710469' 00:42:52.063 killing process with pid 710469 00:42:52.063 17:58:43 keyring_linux -- common/autotest_common.sh@969 -- # kill 710469 00:42:52.063 17:58:43 keyring_linux -- common/autotest_common.sh@974 -- # wait 710469 00:42:52.324 00:42:52.324 real 0m5.213s 00:42:52.324 user 0m9.661s 00:42:52.324 sys 0m1.453s 00:42:52.324 17:58:44 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:52.324 17:58:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:52.324 ************************************ 00:42:52.324 END TEST keyring_linux 00:42:52.324 ************************************ 00:42:52.324 17:58:44 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:42:52.324 17:58:44 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:42:52.324 17:58:44 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:42:52.324 17:58:44 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:42:52.324 17:58:44 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:42:52.324 17:58:44 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:42:52.324 17:58:44 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:42:52.324 17:58:44 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:42:52.324 17:58:44 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:42:52.324 17:58:44 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:42:52.324 17:58:44 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:42:52.324 17:58:44 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:42:52.324 17:58:44 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:42:52.324 17:58:44 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:42:52.324 17:58:44 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:42:52.324 17:58:44 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:42:52.324 17:58:44 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:42:52.324 17:58:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:52.324 17:58:44 -- common/autotest_common.sh@10 -- # set +x 00:42:52.324 17:58:44 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:42:52.324 17:58:44 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:42:52.324 17:58:44 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:42:52.324 17:58:44 -- common/autotest_common.sh@10 -- # set +x 00:43:00.463 INFO: APP EXITING 00:43:00.463 INFO: 
killing all VMs 00:43:00.463 INFO: killing vhost app 00:43:00.463 INFO: EXIT DONE 00:43:03.008 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:43:03.008 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:43:03.270 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:43:03.270 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:43:03.270 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:43:03.270 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:43:03.270 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:43:03.270 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:43:03.270 0000:65:00.0 (144d a80a): Already using the nvme driver 00:43:03.270 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:43:03.270 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:43:03.270 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:43:03.270 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:43:03.530 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:43:03.530 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:43:03.530 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:43:03.530 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:43:07.737 Cleaning 00:43:07.737 Removing: /var/run/dpdk/spdk0/config 00:43:07.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:07.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:07.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:07.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:07.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:43:07.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:43:07.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:43:07.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:43:07.737 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:07.737 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:07.737 Removing: /var/run/dpdk/spdk1/config 00:43:07.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:43:07.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:43:07.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:43:07.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:43:07.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:43:07.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:43:07.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:43:07.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:43:07.737 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:43:07.737 Removing: /var/run/dpdk/spdk1/hugepage_info 00:43:07.737 Removing: /var/run/dpdk/spdk2/config 00:43:07.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:43:07.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:43:07.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:43:07.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:43:07.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:43:07.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:43:07.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:43:07.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:43:07.737 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:43:07.737 Removing: /var/run/dpdk/spdk2/hugepage_info 00:43:07.737 Removing: /var/run/dpdk/spdk3/config 00:43:07.737 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:43:07.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:43:07.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:43:07.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:43:07.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:43:07.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:43:07.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:43:07.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:43:07.737 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:43:07.737 Removing: /var/run/dpdk/spdk3/hugepage_info 00:43:07.737 Removing: /var/run/dpdk/spdk4/config 00:43:07.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:43:07.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:43:07.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:43:07.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:43:07.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:43:07.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:43:07.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:43:07.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:43:07.737 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:43:07.737 Removing: /var/run/dpdk/spdk4/hugepage_info 00:43:07.737 Removing: /dev/shm/bdev_svc_trace.1 00:43:07.737 Removing: /dev/shm/nvmf_trace.0 00:43:07.737 Removing: /dev/shm/spdk_tgt_trace.pid128171 00:43:07.737 Removing: /var/run/dpdk/spdk0 00:43:07.737 Removing: /var/run/dpdk/spdk1 00:43:07.737 Removing: /var/run/dpdk/spdk2 00:43:07.737 Removing: /var/run/dpdk/spdk3 00:43:07.737 Removing: /var/run/dpdk/spdk4 00:43:07.737 Removing: /var/run/dpdk/spdk_pid126355 00:43:07.737 Removing: /var/run/dpdk/spdk_pid128171 00:43:07.737 Removing: /var/run/dpdk/spdk_pid129137 00:43:07.737 Removing: /var/run/dpdk/spdk_pid130178 00:43:07.737 Removing: /var/run/dpdk/spdk_pid130516 00:43:07.737 Removing: /var/run/dpdk/spdk_pid131577 00:43:07.737 Removing: /var/run/dpdk/spdk_pid131821 00:43:07.737 Removing: /var/run/dpdk/spdk_pid132054 00:43:07.737 Removing: /var/run/dpdk/spdk_pid133189 00:43:07.737 Removing: /var/run/dpdk/spdk_pid133930 00:43:07.737 Removing: /var/run/dpdk/spdk_pid134283 00:43:07.737 Removing: /var/run/dpdk/spdk_pid134623 00:43:07.737 Removing: /var/run/dpdk/spdk_pid134992 00:43:07.737 Removing: /var/run/dpdk/spdk_pid135314 00:43:07.737 Removing: /var/run/dpdk/spdk_pid135625 00:43:07.737 Removing: /var/run/dpdk/spdk_pid135973 00:43:07.737 Removing: /var/run/dpdk/spdk_pid136366 00:43:07.737 Removing: /var/run/dpdk/spdk_pid137470 00:43:07.737 Removing: /var/run/dpdk/spdk_pid141037 00:43:07.737 Removing: /var/run/dpdk/spdk_pid141407 00:43:07.737 Removing: /var/run/dpdk/spdk_pid141763 00:43:07.737 Removing: /var/run/dpdk/spdk_pid142066 00:43:07.737 Removing: /var/run/dpdk/spdk_pid142469 00:43:07.737 Removing: /var/run/dpdk/spdk_pid142587 00:43:07.737 Removing: /var/run/dpdk/spdk_pid143174 00:43:07.737 Removing: /var/run/dpdk/spdk_pid143192 00:43:07.737 Removing: /var/run/dpdk/spdk_pid143559 00:43:07.737 Removing: /var/run/dpdk/spdk_pid143873 00:43:07.737 Removing: /var/run/dpdk/spdk_pid143935 00:43:07.737 Removing: /var/run/dpdk/spdk_pid144263 00:43:07.737 Removing: /var/run/dpdk/spdk_pid144718 00:43:07.737 Removing: /var/run/dpdk/spdk_pid145068 00:43:07.737 Removing: /var/run/dpdk/spdk_pid145464 00:43:07.737 Removing: /var/run/dpdk/spdk_pid150149 00:43:07.737 Removing: /var/run/dpdk/spdk_pid155510 00:43:07.737 
Removing: /var/run/dpdk/spdk_pid167743 00:43:07.737 Removing: /var/run/dpdk/spdk_pid168554 00:43:07.737 Removing: /var/run/dpdk/spdk_pid173797 00:43:07.737 Removing: /var/run/dpdk/spdk_pid174298 00:43:07.737 Removing: /var/run/dpdk/spdk_pid180157 00:43:07.737 Removing: /var/run/dpdk/spdk_pid187321 00:43:07.737 Removing: /var/run/dpdk/spdk_pid190528 00:43:07.737 Removing: /var/run/dpdk/spdk_pid203406 00:43:07.737 Removing: /var/run/dpdk/spdk_pid214591 00:43:07.737 Removing: /var/run/dpdk/spdk_pid216766 00:43:07.737 Removing: /var/run/dpdk/spdk_pid217929 00:43:07.737 Removing: /var/run/dpdk/spdk_pid239572 00:43:07.737 Removing: /var/run/dpdk/spdk_pid244489 00:43:07.737 Removing: /var/run/dpdk/spdk_pid302476 00:43:07.737 Removing: /var/run/dpdk/spdk_pid308924 00:43:07.737 Removing: /var/run/dpdk/spdk_pid316158 00:43:07.737 Removing: /var/run/dpdk/spdk_pid323546 00:43:07.737 Removing: /var/run/dpdk/spdk_pid323616 00:43:07.737 Removing: /var/run/dpdk/spdk_pid324654 00:43:07.737 Removing: /var/run/dpdk/spdk_pid325686 00:43:07.737 Removing: /var/run/dpdk/spdk_pid326752 00:43:07.737 Removing: /var/run/dpdk/spdk_pid327362 00:43:07.737 Removing: /var/run/dpdk/spdk_pid327455 00:43:07.737 Removing: /var/run/dpdk/spdk_pid327700 00:43:07.737 Removing: /var/run/dpdk/spdk_pid327804 00:43:07.737 Removing: /var/run/dpdk/spdk_pid327812 00:43:07.738 Removing: /var/run/dpdk/spdk_pid328816 00:43:07.738 Removing: /var/run/dpdk/spdk_pid329818 00:43:07.738 Removing: /var/run/dpdk/spdk_pid330852 00:43:07.738 Removing: /var/run/dpdk/spdk_pid331498 00:43:07.738 Removing: /var/run/dpdk/spdk_pid331584 00:43:07.738 Removing: /var/run/dpdk/spdk_pid331839 00:43:07.738 Removing: /var/run/dpdk/spdk_pid333266 00:43:07.738 Removing: /var/run/dpdk/spdk_pid334643 00:43:07.738 Removing: /var/run/dpdk/spdk_pid344951 00:43:07.738 Removing: /var/run/dpdk/spdk_pid380563 00:43:07.738 Removing: /var/run/dpdk/spdk_pid386054 00:43:07.738 Removing: /var/run/dpdk/spdk_pid388040 00:43:07.738 Removing: /var/run/dpdk/spdk_pid390378 00:43:07.738 Removing: /var/run/dpdk/spdk_pid390721 00:43:07.738 Removing: /var/run/dpdk/spdk_pid390892 00:43:07.738 Removing: /var/run/dpdk/spdk_pid391098 00:43:07.738 Removing: /var/run/dpdk/spdk_pid391903 00:43:07.738 Removing: /var/run/dpdk/spdk_pid394149 00:43:07.738 Removing: /var/run/dpdk/spdk_pid395501 00:43:07.738 Removing: /var/run/dpdk/spdk_pid396021 00:43:07.738 Removing: /var/run/dpdk/spdk_pid398652 00:43:07.738 Removing: /var/run/dpdk/spdk_pid399512 00:43:07.738 Removing: /var/run/dpdk/spdk_pid400395 00:43:07.738 Removing: /var/run/dpdk/spdk_pid405525 00:43:07.738 Removing: /var/run/dpdk/spdk_pid412289 00:43:07.738 Removing: /var/run/dpdk/spdk_pid412290 00:43:07.738 Removing: /var/run/dpdk/spdk_pid412291 00:43:07.738 Removing: /var/run/dpdk/spdk_pid417049 00:43:07.738 Removing: /var/run/dpdk/spdk_pid428301 00:43:07.738 Removing: /var/run/dpdk/spdk_pid433140 00:43:07.738 Removing: /var/run/dpdk/spdk_pid440562 00:43:07.738 Removing: /var/run/dpdk/spdk_pid442112 00:43:07.738 Removing: /var/run/dpdk/spdk_pid443755 00:43:07.999 Removing: /var/run/dpdk/spdk_pid445596 00:43:07.999 Removing: /var/run/dpdk/spdk_pid451423 00:43:07.999 Removing: /var/run/dpdk/spdk_pid456518 00:43:07.999 Removing: /var/run/dpdk/spdk_pid465747 00:43:07.999 Removing: /var/run/dpdk/spdk_pid465781 00:43:07.999 Removing: /var/run/dpdk/spdk_pid470968 00:43:07.999 Removing: /var/run/dpdk/spdk_pid471201 00:43:07.999 Removing: /var/run/dpdk/spdk_pid471527 00:43:07.999 Removing: /var/run/dpdk/spdk_pid471918 00:43:07.999 Removing: 
/var/run/dpdk/spdk_pid472037 00:43:07.999 Removing: /var/run/dpdk/spdk_pid477643 00:43:07.999 Removing: /var/run/dpdk/spdk_pid478575 00:43:07.999 Removing: /var/run/dpdk/spdk_pid484422 00:43:07.999 Removing: /var/run/dpdk/spdk_pid487618 00:43:07.999 Removing: /var/run/dpdk/spdk_pid494391 00:43:07.999 Removing: /var/run/dpdk/spdk_pid501000 00:43:07.999 Removing: /var/run/dpdk/spdk_pid511340 00:43:07.999 Removing: /var/run/dpdk/spdk_pid520119 00:43:07.999 Removing: /var/run/dpdk/spdk_pid520122 00:43:07.999 Removing: /var/run/dpdk/spdk_pid544457 00:43:07.999 Removing: /var/run/dpdk/spdk_pid545152 00:43:07.999 Removing: /var/run/dpdk/spdk_pid546041 00:43:07.999 Removing: /var/run/dpdk/spdk_pid546832 00:43:07.999 Removing: /var/run/dpdk/spdk_pid547889 00:43:07.999 Removing: /var/run/dpdk/spdk_pid548577 00:43:07.999 Removing: /var/run/dpdk/spdk_pid549263 00:43:07.999 Removing: /var/run/dpdk/spdk_pid549991 00:43:07.999 Removing: /var/run/dpdk/spdk_pid555385 00:43:08.000 Removing: /var/run/dpdk/spdk_pid555710 00:43:08.000 Removing: /var/run/dpdk/spdk_pid562868 00:43:08.000 Removing: /var/run/dpdk/spdk_pid563207 00:43:08.000 Removing: /var/run/dpdk/spdk_pid569732 00:43:08.000 Removing: /var/run/dpdk/spdk_pid575042 00:43:08.000 Removing: /var/run/dpdk/spdk_pid587331 00:43:08.000 Removing: /var/run/dpdk/spdk_pid588069 00:43:08.000 Removing: /var/run/dpdk/spdk_pid593185 00:43:08.000 Removing: /var/run/dpdk/spdk_pid593536 00:43:08.000 Removing: /var/run/dpdk/spdk_pid598646 00:43:08.000 Removing: /var/run/dpdk/spdk_pid605732 00:43:08.000 Removing: /var/run/dpdk/spdk_pid608725 00:43:08.000 Removing: /var/run/dpdk/spdk_pid621134 00:43:08.000 Removing: /var/run/dpdk/spdk_pid631947 00:43:08.000 Removing: /var/run/dpdk/spdk_pid634051 00:43:08.000 Removing: /var/run/dpdk/spdk_pid635496 00:43:08.000 Removing: /var/run/dpdk/spdk_pid655307 00:43:08.000 Removing: /var/run/dpdk/spdk_pid660153 00:43:08.000 Removing: /var/run/dpdk/spdk_pid663521 00:43:08.000 Removing: /var/run/dpdk/spdk_pid671133 00:43:08.000 Removing: /var/run/dpdk/spdk_pid671268 00:43:08.000 Removing: /var/run/dpdk/spdk_pid677383 00:43:08.000 Removing: /var/run/dpdk/spdk_pid679596 00:43:08.000 Removing: /var/run/dpdk/spdk_pid682112 00:43:08.000 Removing: /var/run/dpdk/spdk_pid683310 00:43:08.000 Removing: /var/run/dpdk/spdk_pid686107 00:43:08.000 Removing: /var/run/dpdk/spdk_pid687841 00:43:08.000 Removing: /var/run/dpdk/spdk_pid697971 00:43:08.000 Removing: /var/run/dpdk/spdk_pid698633 00:43:08.000 Removing: /var/run/dpdk/spdk_pid699295 00:43:08.000 Removing: /var/run/dpdk/spdk_pid702255 00:43:08.000 Removing: /var/run/dpdk/spdk_pid702686 00:43:08.000 Removing: /var/run/dpdk/spdk_pid703287 00:43:08.000 Removing: /var/run/dpdk/spdk_pid708211 00:43:08.261 Removing: /var/run/dpdk/spdk_pid708217 00:43:08.261 Removing: /var/run/dpdk/spdk_pid710034 00:43:08.261 Removing: /var/run/dpdk/spdk_pid710469 00:43:08.261 Removing: /var/run/dpdk/spdk_pid710688 00:43:08.261 Clean 00:43:08.261 17:59:00 -- common/autotest_common.sh@1451 -- # return 0 00:43:08.261 17:59:00 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:43:08.261 17:59:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:08.261 17:59:00 -- common/autotest_common.sh@10 -- # set +x 00:43:08.261 17:59:00 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:43:08.261 17:59:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:08.261 17:59:00 -- common/autotest_common.sh@10 -- # set +x 00:43:08.261 17:59:00 -- spdk/autotest.sh@388 -- # chmod a+r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:08.261 17:59:00 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:43:08.261 17:59:00 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:43:08.261 17:59:00 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:43:08.261 17:59:00 -- spdk/autotest.sh@394 -- # hostname 00:43:08.261 17:59:00 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:43:08.522 geninfo: WARNING: invalid characters removed from testname! 00:43:35.101 17:59:25 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:36.481 17:59:28 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:38.389 17:59:30 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:40.299 17:59:32 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:42.841 17:59:34 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:44.224 17:59:36 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 
'*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:46.159 17:59:37 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:43:46.159 17:59:37 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:43:46.159 17:59:37 -- common/autotest_common.sh@1681 -- $ lcov --version 00:43:46.159 17:59:37 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:43:46.159 17:59:37 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:43:46.159 17:59:37 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:43:46.159 17:59:37 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:43:46.159 17:59:37 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:43:46.159 17:59:37 -- scripts/common.sh@336 -- $ IFS=.-: 00:43:46.159 17:59:37 -- scripts/common.sh@336 -- $ read -ra ver1 00:43:46.159 17:59:37 -- scripts/common.sh@337 -- $ IFS=.-: 00:43:46.159 17:59:37 -- scripts/common.sh@337 -- $ read -ra ver2 00:43:46.159 17:59:37 -- scripts/common.sh@338 -- $ local 'op=<' 00:43:46.159 17:59:37 -- scripts/common.sh@340 -- $ ver1_l=2 00:43:46.159 17:59:37 -- scripts/common.sh@341 -- $ ver2_l=1 00:43:46.159 17:59:37 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:43:46.159 17:59:37 -- scripts/common.sh@344 -- $ case "$op" in 00:43:46.159 17:59:37 -- scripts/common.sh@345 -- $ : 1 00:43:46.159 17:59:37 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:43:46.159 17:59:37 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:46.159 17:59:37 -- scripts/common.sh@365 -- $ decimal 1 00:43:46.159 17:59:37 -- scripts/common.sh@353 -- $ local d=1 00:43:46.159 17:59:37 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:43:46.159 17:59:37 -- scripts/common.sh@355 -- $ echo 1 00:43:46.159 17:59:37 -- scripts/common.sh@365 -- $ ver1[v]=1 00:43:46.159 17:59:37 -- scripts/common.sh@366 -- $ decimal 2 00:43:46.159 17:59:37 -- scripts/common.sh@353 -- $ local d=2 00:43:46.159 17:59:37 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:43:46.159 17:59:37 -- scripts/common.sh@355 -- $ echo 2 00:43:46.159 17:59:37 -- scripts/common.sh@366 -- $ ver2[v]=2 00:43:46.159 17:59:37 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:43:46.159 17:59:37 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:43:46.159 17:59:37 -- scripts/common.sh@368 -- $ return 0 00:43:46.159 17:59:37 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:46.159 17:59:37 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:43:46.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:46.159 --rc genhtml_branch_coverage=1 00:43:46.159 --rc genhtml_function_coverage=1 00:43:46.159 --rc genhtml_legend=1 00:43:46.159 --rc geninfo_all_blocks=1 00:43:46.159 --rc geninfo_unexecuted_blocks=1 00:43:46.159 00:43:46.159 ' 00:43:46.159 17:59:37 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:43:46.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:46.159 --rc genhtml_branch_coverage=1 00:43:46.159 --rc genhtml_function_coverage=1 00:43:46.159 --rc genhtml_legend=1 00:43:46.159 --rc geninfo_all_blocks=1 00:43:46.159 --rc geninfo_unexecuted_blocks=1 00:43:46.159 00:43:46.159 ' 00:43:46.159 17:59:37 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:43:46.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:46.159 --rc genhtml_branch_coverage=1 00:43:46.159 --rc genhtml_function_coverage=1 00:43:46.159 --rc genhtml_legend=1 
00:43:46.159 --rc geninfo_all_blocks=1
00:43:46.159 --rc geninfo_unexecuted_blocks=1
00:43:46.159 
00:43:46.159 '
00:43:46.159 17:59:37 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:43:46.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:43:46.159 --rc genhtml_branch_coverage=1
00:43:46.159 --rc genhtml_function_coverage=1
00:43:46.159 --rc genhtml_legend=1
00:43:46.159 --rc geninfo_all_blocks=1
00:43:46.159 --rc geninfo_unexecuted_blocks=1
00:43:46.159 
00:43:46.159 '
00:43:46.159 17:59:37 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:43:46.159 17:59:37 -- scripts/common.sh@15 -- $ shopt -s extglob
00:43:46.159 17:59:37 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:43:46.159 17:59:37 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:43:46.159 17:59:37 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:43:46.159 17:59:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:43:46.159 17:59:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:43:46.159 17:59:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:43:46.159 17:59:37 -- paths/export.sh@5 -- $ export PATH
00:43:46.160 17:59:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:43:46.160 17:59:37 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:43:46.160 17:59:37 -- common/autobuild_common.sh@486 -- $ date +%s
00:43:46.160 17:59:37 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728403177.XXXXXX
00:43:46.160 17:59:37 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728403177.DLpMx2
00:43:46.160 17:59:37 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:43:46.160 17:59:37 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:43:46.160 17:59:37 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:43:46.160 17:59:37 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:43:46.160 17:59:37 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:43:46.160 17:59:37 -- common/autobuild_common.sh@502 -- $ get_config_params
00:43:46.160 17:59:37 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:43:46.160 17:59:37 -- common/autotest_common.sh@10 -- $ set +x
00:43:46.160 17:59:37 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:43:46.160 17:59:37 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:43:46.160 17:59:37 -- pm/common@17 -- $ local monitor
00:43:46.160 17:59:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:43:46.160 17:59:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:43:46.160 17:59:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:43:46.160 17:59:37 -- pm/common@21 -- $ date +%s
00:43:46.160 17:59:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:43:46.160 17:59:37 -- pm/common@25 -- $ sleep 1
00:43:46.160 17:59:37 -- pm/common@21 -- $ date +%s
00:43:46.160 17:59:37 -- pm/common@21 -- $ date +%s
00:43:46.160 17:59:37 -- pm/common@21 -- $ date +%s
00:43:46.160 17:59:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728403177
00:43:46.160 17:59:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728403177
00:43:46.160 17:59:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728403177
00:43:46.160 17:59:37 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728403177
00:43:46.160 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728403177_collect-cpu-load.pm.log
00:43:46.160 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728403177_collect-vmstat.pm.log
00:43:46.160 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728403177_collect-cpu-temp.pm.log
00:43:46.160 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728403177_collect-bmc-pm.bmc.pm.log
00:43:47.242 17:59:38 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:43:47.242 17:59:38 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:43:47.242 17:59:38 -- spdk/autopackage.sh@14 -- $ timing_finish
00:43:47.242 17:59:38 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:43:47.242 17:59:38 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:43:47.242 17:59:38 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:43:47.242 17:59:38 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:43:47.242 17:59:38 -- pm/common@29 -- $ signal_monitor_resources TERM
00:43:47.242 17:59:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:43:47.242 17:59:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:43:47.242 17:59:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:43:47.242 17:59:38 -- pm/common@44 -- $ pid=723876
00:43:47.242 17:59:38 -- pm/common@50 -- $ kill -TERM 723876
00:43:47.242 17:59:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:43:47.242 17:59:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:43:47.242 17:59:38 -- pm/common@44 -- $ pid=723878
00:43:47.242 17:59:38 -- pm/common@50 -- $ kill -TERM 723878
00:43:47.242 17:59:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:43:47.242 17:59:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:43:47.242 17:59:38 -- pm/common@44 -- $ pid=723879
00:43:47.242 17:59:38 -- pm/common@50 -- $ kill -TERM 723879
00:43:47.242 17:59:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:43:47.242 17:59:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:43:47.242 17:59:38 -- pm/common@44 -- $ pid=723904
00:43:47.242 17:59:38 -- pm/common@50 -- $ sudo -E kill -TERM 723904
00:43:47.242 + [[ -n 40910 ]]
00:43:47.242 + sudo kill 40910
00:43:47.307 [Pipeline] }
00:43:47.322 [Pipeline] // stage
00:43:47.326 [Pipeline] }
00:43:47.338 [Pipeline] // timeout
00:43:47.342 [Pipeline] }
00:43:47.355 [Pipeline] // catchError
00:43:47.359 [Pipeline] }
00:43:47.373 [Pipeline] // wrap
00:43:47.378 [Pipeline] }
00:43:47.389 [Pipeline] // catchError
00:43:47.397 [Pipeline] stage
00:43:47.399 [Pipeline] { (Epilogue)
00:43:47.410 [Pipeline] catchError
00:43:47.412 [Pipeline] {
00:43:47.423 [Pipeline] echo
00:43:47.425 Cleanup processes
00:43:47.430 [Pipeline] sh
00:43:47.727 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:43:47.727 724029 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:43:47.727 724586 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:43:47.741 [Pipeline] sh
00:43:48.030 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:43:48.030 ++ grep -v 'sudo pgrep'
00:43:48.030 ++ awk '{print $1}'
00:43:48.030 + sudo kill -9 724029
00:43:48.044 [Pipeline] sh
00:43:48.337 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:44:00.576 [Pipeline] sh
00:44:00.867 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:44:00.867 Artifacts sizes are good
00:44:00.883 [Pipeline] archiveArtifacts
00:44:00.889 Archiving artifacts
00:44:01.305 [Pipeline] sh
00:44:01.594 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:44:01.610 [Pipeline] cleanWs
00:44:01.621 [WS-CLEANUP] Deleting project workspace...
00:44:01.621 [WS-CLEANUP] Deferred wipeout is used...
00:44:01.629 [WS-CLEANUP] done
00:44:01.631 [Pipeline] }
00:44:01.650 [Pipeline] // catchError
00:44:01.662 [Pipeline] sh
00:44:01.953 + logger -p user.info -t JENKINS-CI
00:44:01.964 [Pipeline] }
00:44:01.978 [Pipeline] // stage
00:44:01.983 [Pipeline] }
00:44:01.998 [Pipeline] // node
00:44:02.003 [Pipeline] End of Pipeline
00:44:02.041 Finished: SUCCESS